https://www.reliawiki.com/api.php?action=feedcontributions&user=Chuck+Smith&feedformat=atomReliaWiki - User contributions [en]2024-03-28T21:10:35ZUser contributionsMediaWiki 1.39.2https://www.reliawiki.com/index.php?title=ReliaSoft_Examples&diff=65573ReliaSoft Examples2020-10-12T20:56:18Z<p>Chuck Smith: </p>
<hr />
<div>{{Allexamplesindex}}<br />
<br />
{| class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="20" align="center" width="600px"<br />
|-<br />
| valign="middle" align="center" bgcolor=EEEDF7|<span style="font-size:20px;">[[Weibull++ Examples|Weibull++ Examples]]</span><br />
|}<br />
<br><br />
{| class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="20" align="center" width="600px"<br />
|-<br />
| valign="middle" align="center" bgcolor=EEEDF7|<span style="font-size:20px;">[[ALTA_Examples|Weibull++ Accelerated Life Testing Module Examples]]</span><br />
|}<br />
<br><br />
{| class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="20" align="center" width="600px"<br />
|-<br />
| valign="middle" align="center" bgcolor=EEEDF7|<span style="font-size:20px;">[[BlockSim_Examples|BlockSim Examples]]</span><br />
|}<br />
<br><br />
{| class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="20" align="center" width="600px"<br />
|-<br />
| valign="middle" align="center" bgcolor=EEEDF7|<span style="font-size:20px;">[[XFMEA_Examples|XFMEA Examples]]</span><br />
|}<br />
<br><br />
{| class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="20" align="center" width="600px"<br />
|-<br />
| valign="middle" align="center" bgcolor=EEEDF7|<span style="font-size:20px;">[[RCM++_Examples|RCM++ Examples]]</span><br />
|}<br />
<br><br />
{| class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="20" align="center" width="600px"<br />
|-<br />
| valign="middle" align="center" bgcolor=EEEDF7|<span style="font-size:20px;">[[RGA_Examples|Weibull++ Reliability Growth Module Examples]]</span><br />
|}<br />
<br />
<br> {{Template:ReliaSoft Footer}}</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=XFMEA_Examples&diff=65572XFMEA Examples2020-10-12T20:38:46Z<p>Chuck Smith: </p>
<hr />
<div>{{Allexamplesindex}} __NOTOC__<br />
<br />
==Quick Start Guide==<br />
The ''XFMEA & RCM++ Quick Start Guide'' is designed to help you explore many of the software's key features by working through step-by-step instructions for practical application examples. The guide is available as a free PDF download.<br />
* [http://www.synthesisplatform.net/Xfmea/en/QS_Xfmea10.pdf Download the print-ready PDF file]<br />
<br />
<br />
==Examples==<br />
===Design for Reliability===<br />
*Creating a Customized DFR Planner -- [http://www.synthesis8.com/Xfmea/en/QS_Xfmea10.pdf See Chapter 3]<br />
<br />
===Design FMEA===<br />
*DFMEA for a Single Light Pendant Chandelier -- [http://www.synthesis8.com/Xfmea/en/QS_Xfmea10.pdf See Chapter 5]<br />
<br />
===Risk Discovery Analysis===<br />
*[http://www.reliawiki.org/index.php/Xfmea_Risk_Discovery_Analysis_Example Risk Discovery Analysis for a Multi-Function Printer]<br />
* Preliminary Risk Assessment for a Single Light Pendant Chandelier -- [http://www.synthesis8.com/Xfmea/en/QS_Xfmea10.pdf See Chapter 4]<br />
<br />
===Failure Modes and Reliability Analysis===<br />
*Estimating System Reliability for a Single Light Pendant Chandelier -- [http://www.synthesis8.com/Xfmea/en/QS_Xfmea10.pdf See Chapter 6]</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Xfmea_Examples&diff=65571Xfmea Examples2020-10-12T20:38:06Z<p>Chuck Smith: Chuck Smith moved page Xfmea Examples to XFMEA Examples: New formatting of XFMEA name</p>
<hr />
<div>#REDIRECT [[XFMEA Examples]]</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=ALTA_Examples&diff=65569ALTA Examples2020-10-12T20:37:20Z<p>Chuck Smith: Chuck Smith moved page ALTA Examples to Weibull++ Accelerated Life Testing Module Examples: Remove ALTA</p>
<hr />
<div>#REDIRECT [[Weibull++ Accelerated Life Testing Module Examples]]</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Weibull%2B%2B_Accelerated_Life_Testing_Module_Examples&diff=65568Weibull++ Accelerated Life Testing Module Examples2020-10-12T20:37:20Z<p>Chuck Smith: Chuck Smith moved page ALTA Examples to Weibull++ Accelerated Life Testing Module Examples: Remove ALTA</p>
<hr />
<div>{{Allexamplesindex}} __NOTOC__<br />
<br />
<br />
==Reference Examples==<br />
*[[ALTA_Reference_Examples|Weibull++ Accelerated Life Testing Module Reference Examples]] (demonstrate how the Weibull++ Accelerated Life Testing module solves a variety of problems from published references)<br />
<br />
<br />
==Examples==<br />
===Single-Stress QALT Analysis===<br />
<br />
*Examples by Model<br />
** [[Arrhenius_Example|Arrhenius Model Example]]<br />
** [[Eyring_Example|Eyring Model Example]] <br />
** [[Inverse_Power_Law_Example|Inverse Power Law Model Example]]<br />
* [[ALTA_Standard_Folio_Plot_Type_Example|Weibull++ Accelerated Life Testing Module Standard Folio Plot Types]]<br />
* Analyzing Data from an Accelerated Demonstration Test - View it in '''[http://www.reliasoft.com/alta/examples/rc2/index.htm HTML]''' or '''[http://www.reliasoft.tv/alta/appexamples/alta_app_ex_2.html Video]'''<br />
* [[ACME Example|IPL-Weibull Analysis with Confidence Bounds on Plot]]<br />
* [[Mechanical Components Example|Examining the Parameters and Life vs. Stress Plot from an Arrhenius-Weibull Analysis]]<br />
* [[Electronic Components Example]]<br />
* [[Circuit Boards Example]]<br />
* [[Interval Data Example|Interval Data for Electronic Components]]<br />
* [[Paper Clip Example]]<br />
* Stability/Shelf Life Study - View it in '''[http://www.reliasoft.com/alta/examples/rc6/index.htm HTML]''' or '''[http://www.reliasoft.tv/alta/appexamples/alta_app_ex_6.html Video]'''<br />
* [[Tensile Components Example]]<br />
<br />
===Multi-Stress QALT Analysis===<br />
*Examples by Model<br />
** [[Generalized_Eyring_Example|Generalized Eyring Model Example]]<br />
** [[T-H_Example|Temperature-Humidity Model Example]]<br />
** [[Temperature-Nonthermal_Relationship_Example|Temperature-Nonthermal Model Example]]<br />
** [[General_Log-Linear_Relationship_Example|General Log-Linear Model Example]]<br />
** [[Proportional_Hazards_Medical_Data_Example|Proportional Hazards Model Example]]<br />
* [[Electronic Devices Example|Two-Stress Example for Electronic Devices]]<br />
* Using Indicator Variables - View it in '''[http://www.reliasoft.com/alta/examples/rc5/index.htm HTML]''' or '''[http://www.reliasoft.tv/alta/appexamples/alta_app_ex_5.html Video]'''<br />
<br />
===QALT with Stress Profiles===<br />
* Automotive Part Test - View it in '''[http://www.reliasoft.com/alta/examples/rc4/index.htm HTML]''' or '''[http://www.reliasoft.tv/alta/appexamples/alta_app_ex_4.html Video]'''<br />
* Multiple Time-Dependent Stresses - View it in '''[http://www.reliasoft.com/alta/examples/rc8/index.htm HTML]''' or '''[http://www.reliasoft.tv/alta/appexamples/alta_app_ex_8.html Video]'''<br />
* Voltage Step-Stress Example - View it in '''[http://www.reliasoft.com/alta/examples/rc1/index.htm HTML]''' or '''[http://www.reliasoft.tv/alta/appexamples/alta_app_ex_1.html Video]'''<br />
<br />
===Related Analyses===<br />
* [[Likelihood Ratio Test Example]]<br />
* Accelerated Degradation Analysis - View it in '''[http://www.reliasoft.com/alta/examples/rc3/index.htm HTML]''' or '''[http://www.reliasoft.tv/alta/appexamples/alta_app_ex_3.html Video]'''<br />
* [[ALTA_Test_Plan_Example|Accelerated Life Test Plans - Single Stress Type]]<br />
* Accelerated Life Test Plans: Two Stress Types - View it in '''[http://www.reliasoft.com/alta/examples/rc7/index.htm HTML]''' or '''[http://www.reliasoft.tv/alta/appexamples/alta_app_ex_7.html Video]'''<br />
*[[ALTA SimuMatic Example|Using Weibull++ Accelerated Life Testing Module SimuMatic for Test Design]]<br />
<br></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=RCM%2B%2B_Examples&diff=65566RCM++ Examples2020-10-12T20:35:31Z<p>Chuck Smith: </p>
<hr />
<div>{{Allexamplesindex}} __NOTOC__<br />
<br />
==Quick Start Guide==<br />
The ''XFMEA & RCM++ Quick Start Guide'' is designed to help you explore many of the software's key features by working through step-by-step instructions for practical application examples. The guide is available as a free PDF download.<br />
* [http://www.synthesisplatform.net/RCM/en/QS_RCM10.pdf Download the print-ready PDF file]<br />
<br />
<br />
==Examples==<br />
===Design for Reliability===<br />
*Creating a Customized DFR Planner -- ''See Chapter 3 in the Quick Start Guide''<br />
<br />
===Reliability Centered Maintenance===<br />
*Traditional RCM Analysis for a Conveyor Belt -- ''See Chapter 8 in the Quick Start Guide''<br />
<br />
===Risk Discovery Analysis===<br />
*[http://www.reliawiki.org/index.php/Xfmea_Risk_Discovery_Analysis_Example Risk Discovery Analysis for a Multi-Function Printer]<br />
* Preliminary Risk Assessment for a Single Light Pendant Chandelier -- ''See Chapter 4 in the Quick Start Guide''<br />
<br />
===Failure Modes and Reliability Analysis===<br />
*Estimating System Reliability for a Single Light Pendant Chandelier -- ''See Chapter 6 in the Quick Start Guide''<br />
*Estimating Availability and Maintenance Costs with the FMRA Tool -- ''See Chapter 7 in the Quick Start Guide''<br />
<br />
===FMEA===<br />
*DFMEA for a Single Light Pendant Chandelier -- ''See Chapter 5 in the Quick Start Guide''</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=RGA_Examples&diff=65565RGA Examples2020-10-12T20:34:31Z<p>Chuck Smith: Chuck Smith moved page RGA Examples to Weibull++ Reliability Growth Module Examples: Remove RGA</p>
<hr />
<div>#REDIRECT [[Weibull++ Reliability Growth Module Examples]]</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Weibull%2B%2B_Reliability_Growth_Module_Examples&diff=65564Weibull++ Reliability Growth Module Examples2020-10-12T20:34:31Z<p>Chuck Smith: Chuck Smith moved page RGA Examples to Weibull++ Reliability Growth Module Examples: Remove RGA</p>
<hr />
<div>{{Allexamplesindex}} __NOTOC__<br />
<br />
==Reference Examples==<br />
*[[RGA_Reference_Examples|Weibull++ Reliability Growth Module Reference Examples]] (demonstrate how the Weibull++ Reliability Growth module solves a variety of problems from published references)<br />
<br />
<br />
==Examples==<br />
===Traditional Reliability Growth Analysis===<br />
*Simple MTBF Determination. View it in '''[http://www.reliasoft.com/rga/examples/rgex1/index.htm HTML]'''<br />
*Analyzing Software Reliability Growth. View it in '''[http://www.reliasoft.com/rga/examples/rgex5/index.htm HTML]'''<br />
*Examples by Growth Model:<br />
:*[[Duane Model Examples|Duane]]<br />
:*[[Crow-AMSAA Model Examples|Crow-AMSAA (NHPP)]]<br />
:*[[Crow-AMSAA Grouped Data Examples|Crow-AMSAA (Grouped Data)]]<br />
:*[[Lloyd-Lipow_Model_Examples|Lloyd-Lipow]]<br />
:*[[Gompertz_Model_Examples|Standard and Modified Gompertz]]<br />
:*[[Logistic_Model_Examples|Logistic]]<br />
*[[Gap Analysis Example|Gap Analysis]]<br />
*[[Change of Slope Analysis Example|Change of Slope Analysis]]<br />
*[[Failure Discounting Example]]<br />
<br />
===Crow Extended Model Examples===<br />
*Failure Times Data. View it in '''[http://www.reliasoft.com/rga/examples/rgex3/index.htm HTML]'''<br />
*[[Grouped_Data_-_Crow_Extended_Example|Grouped Data]]<br />
*[[Mixed_Data_-_Crow_Extended_Example|Mixed Data]]<br />
*[[Equivalent_System_Example|Multiple Systems with Event Codes]]<br />
*[[Concurrent_Operating_Times_-_Crow_Extended_Example|Multiple Systems - Concurrent Operating Times]]<br />
*[[Known_Operating_Times_-_Crow_Extended_Example|Multiple Systems - Known Operating Times]]<br />
<br />
===Reliability Growth Planning===<br />
*Multi-Phase Planning and Analysis. View it in '''[http://www.reliasoft.com/rga/examples/rgex6/index.htm HTML]'''<br />
*[[Multi-Phase_-_Mixed_Data|Multi-Phase - Mixed Data Example]]<br />
*[[Growth Plan for Three Phases]]<br />
*[[Growth Plan for Four Phases]]<br />
*[[Growth Plan for Seven Phases]]<br />
<br />
===Mission Profiles===<br />
*Mission Profile Testing. View it in '''[http://www.reliasoft.com/rga/examples/rgex7/index.htm HTML]'''<br />
<br />
===Repairable Systems Analysis===<br />
*[[Crow_Extended_Model_for_Repairable_Systems_Analysis_Example|Simple Repairable Systems Analysis]]<br />
*Fielded Systems Example. View it in '''[http://www.reliasoft.com/rga/examples/rgex4/index.htm HTML]'''<br />
*[[Auto Transmission Example|Auto Transmission Example]]<br />
*[[Optimum Overhaul Example|Optimum Overhaul Example]]<br />
*[[Crow_Extended_Model_Fleet_Analysis_Example|Fleet Analysis Example]]<br />
*[[Fleet Analysis Example|Fleet Analysis System Operations Plot Example]]<br />
<br />
===Utilities===<br />
*[[RGA_Monte_Carlo_Simulation_Example|Monte Carlo Simulation Example]]<br />
*[[RGA_SimuMatic_Example|SimuMatic Simulation Example]]<br />
*Repairable System Test Design Examples:<br />
:*[[Repairable_System_Test_Design_Example_-_Solve_for_Time|Solve for Time]]<br />
:*[[Repairable_System_Test_Design_Example_-_Solve_for_Sample_Size|Solve for Sample Size]]</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=BlockSim_Examples&diff=65561BlockSim Examples2020-10-12T20:31:44Z<p>Chuck Smith: </p>
<hr />
<div>{{Allexamplesindex}} __NOTOC__<br />
<br />
<br />
==Reference Examples==<br />
*[[BlockSim_Reference_Examples|BlockSim Reference Examples]] (demonstrate how BlockSim solves a variety of problems from published references)<br />
<br />
<br />
==Examples==<br />
===Analytical Diagrams and Optimization===<br />
* Reliability Analysis of a Storage Cluster System - View it in '''[http://www.reliasoft.com/BlockSim/examples/rc1/index.htm HTML]''' or '''[http://www.reliasoft.tv/blocksim/appexamples/blocksim_app_ex_1.html Video]'''<br />
* Optimized Reliability Allocation - View it in '''[http://www.reliasoft.com/BlockSim/examples/rc2/index.htm HTML]''' or '''[http://www.reliasoft.tv/blocksim/appexamples/blocksim_app_ex_2.html Video]'''<br />
*[http://www.reliasoft.com/BlockSim/examples/rc4/index.htm Using RBDs for Modeling Failure Modes]<br />
*[[BlockSim_Analytical_Fault_Tree_and_RBD_Plot_Examples| Analytical Diagram Plots]]<br />
*[[BlockSim_Allocation_Analysis_Example| Allocation Analysis]]<br />
*[[Time-Dependent System Reliability for Components in Series]]<br />
*[[Time-Dependent System Reliability for Components in Parallel]]<br />
*[[Example_Using_a_Distribution_to_Approximate_the_CDF|Using a Distribution to Approximate the System cdf]]<br />
*[[Example Calculating System Reliability with Duty Cycles|Calculating System Reliability with Duty Cycles]]<br />
*[[Load Sharing Configuration Example|Load Sharing Configuration Example - Failure Modes Analysis]]<br />
*[[Standby Configuration Example|Standby Configuration Example - Car Tires]]<br />
*[[Reliability_Importance_Example|Reliability Importance Plot Example]]<br />
<br />
===Fault Tree Diagrams===<br />
*Simple Examples for Fault Tree Gates<br />
**[[AND Gate Example]]<br />
**[[OR Gate Example]]<br />
**[[Voting OR Gate Example]]<br />
**[[Example_Using_Load_Sharing_Gates_in_Fault_Trees|Load Sharing Gate Example]]<br />
**[[Standby Gate Example]]<br />
*[[Same Example Modeled with RBDs or Fault Trees|Same Example Modeled with RBDs or Fault Trees (Modeling Failure Modes)]]<br />
**[http://www.reliasoft.com/BlockSim/examples/rc6/index.htm Using Fault Trees for Modeling Failure Modes]<br />
<br />
===Simulation Diagrams===<br />
*[http://www.reliasoft.com/BlockSim/examples/rc5/index.htm RAM Analysis for Remote Telecommunications System]<br />
*[http://www.reliasoft.com/BlockSim/examples/rc3/index.htm Effect of Inspection Intervals]<br />
*[[BlockSim_Example:_CM_Triggered_by_Subsystem_Down|Corrective Maintenance Triggered by Failure of a Subsystem]]<br />
*[[Life Cycle Cost Analysis Example]]<br />
<br />
===Phase Diagrams===<br />
*[[BlockSim_Example:_Aircraft_Phases_with_Forced_Landing|Aircraft Phases with Forced Landing]]<br />
*[[BlockSim_Phase_Simulation_Plot_Examples|Oil Refinery Phase Diagram Simulation]]<br />
*[[Example Using Success Failure Paths in Phase Diagrams|Using Success/Failure Paths in Phase Diagrams]]<br />
*[[Example Using Subdiagram Phase Blocks|Using Subdiagram Phase Blocks]]<br />
*[[Interval Maintenance Threshold Example|Using an Interval Maintenance Threshold in a Maintenance Phase]]<br />
*[[Phase Throughput Examples]]<br />
<br />
===State Change Triggers (SCTs)===<br />
*[[Example Using SCT for Standby Rotation|Using State Change Triggers for Standby Rotation]]<br />
*[[Example_Using_SCT_to_Analyze_Tire_Maintenance|Using State Change Triggers to Analyze Tire Maintenance]]<br />
*[[Example_Using_SCT_to_Analyze_Standby_with_Delay|Using State Change Triggers to Model Standby with Delay]]<br />
*[[Example_Using_SCT_to_Model_Two_Standby_Blocks|Using State Change Triggers to Model Two Standby Devices]]<br />
*[[Example_Demonstrating_the_State_Upon_Repair_Option_for_SCT|SCT: The State Upon Repair Option]]<br />
*[[BlockSim_Example:_Default_OFF_unless_SCT_Overridden|Demonstrating the "Default OFF unless SCT overridden" option]]<br />
*[[BlockSim_Example:_Default_ON_unless_SCT_Overridden|Demonstrating the "Default ON unless SCT overridden" option]]<br />
<br />
===Throughput Analysis===<br />
*[[Simple_Throughput_Analysis_Example|Simple Throughput Analysis Example]]<br />
<br />
===Utilities===<br />
*[[Overlay_Plot_Example|Overlay Plots]]<br />
*[[Optimum Replacement Time Example]]<br />
*[[BlockSim_Analytical_FRED_Report_Example| Analytical FRED Report]]</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Weibull%2B%2B_Examples&diff=65560Weibull++ Examples2020-10-12T20:31:26Z<p>Chuck Smith: </p>
<hr />
<div>{{Allexamplesindex}} __NOTOC__<br />
<br />
==Reference Examples==<br />
*[[Weibull%2B%2B_Reference_Examples|Weibull++ Reference Examples]] (demonstrate how Weibull++ solves a variety of problems from published references)<br />
<br />
<br />
==Examples==<br />
===Parametric Life Data Analysis===<br />
*Simple Probability Plotting Examples: [[Probability Plotting Example|Simple Example]], [[3-Parameter_Weibull_Example|3P-Weibull Example]], [[1P_Exponential_Example|1P-Exponential]], [[Normal Distribution Probability Plotting Example|Normal]], [[Example:_Lognormal_Distribution_Probability_Plot|Lognormal]]<br />
*[[Standard_Folio_Plots|Standard Folio Plots]]<br />
* Competing Failure Modes Analysis<br />
:* Two Failure Modes Example. View it in '''[http://www.reliasoft.com/Weibull/examples/rc10/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_10.html Video]'''<br />
:* [[Complex_Failure_Modes_Example|Complex Failure Modes Example]]<br />
<br />
===Non-Parametric Life Data Analysis===<br />
*[[Weibull++ Non-Parametric LDA Plot Example|Kaplan-Meier Method]]<br />
*Simple Actuarial Method. View it in '''[http://www.reliasoft.com/Weibull/examples/rc7/index.htm HTML]'''<br />
<br />
===Degradation Data Analysis===<br />
* Crack Propagation Example (Point Estimation). View it in '''[http://www.reliasoft.com/Weibull/examples/rc4/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_4.html Video]'''<br />
<br />
===Recurrent Event Data Analysis===<br />
*[[Example:_Parametric_RDA_-_Air_Condition_Unit|Parametric RDA - Aircraft Air Conditioning Unit]]<br />
*[[Non_Parametric_RDA_MCF_Example|Non-Parametric RDA - Mean Cumulative Function (MCF) Example]]<br />
*Non-Parametric RDA - Manual Transmission. View it in '''[http://www.reliasoft.com/Weibull/examples/rc8/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_8.html Video]'''<br />
<br />
===Stress-Strength Analysis and Life Comparison===<br />
* [[Stress-Strength_Parameter_Uncertainty_Example|Stress-Strength Analysis Example]]<br />
* [[Stress-Strength_Analysis_in_Design_for_Reliability|Stress-Strength Analysis in DFR - Target Reliability Parameter Estimator]]<br />
* [[Life_Comparison_Examples|Life Comparison Examples: Using Contour Plots or Life Comparison Tool]]<br />
<br />
===Warranty Data Analysis===<br />
* [[Warranty_Data_Analysis_Dates_Format_Example|Dates of Failure Format Warranty Analysis]]<br />
* Nevada Chart Format Warranty Analysis. View it in '''[http://www.reliasoft.com/Weibull/examples/rc5/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_5.html Video]'''<br />
* [[Warranty_Analysis_Non-Homogeneous_Data_Example|Non-Homogeneous Data Warranty Analysis]]<br />
* [[Non-Homogeneous_Data_with_Subset_IDs_Example|Statistical Process Control Example]]<br />
* [[Warranty_Data_Analysis_Times-to-Failure_Format_with_Plot_Example|Times-to-Failure Format Warranty Analysis]]<br />
* [[Warranty_Analysis_Usage_Format_Example|Usage-Based Format Warranty Analysis]]<br />
<br />
===Test Design Examples===<br />
* Reliability Demonstration Test (RDT) Design<br />
**[[Parametric_Binomial_Example_-_Demonstrate_Reliability|Parametric Binomial - Test to Demonstrate Reliability]]<br />
** [[Parametric_Binomial_Example_-_Demonstrate_MTTF|Parametric Binomial - Test to Demonstrate MTTF]]<br />
** [[Non-Parametric Binomial Test Design Example|Non-Parametric Binomial Test Design]]<br />
** [[Exponential_Chi-Squared_Example|Exponential Chi-Squared Test Design]]<br />
** [[Non-Parametric Bayesian - Expert Opinion|Non-Parametric Bayesian with Prior Information from Expert Opinion]]<br />
** [[Non-Parametric Bayesian - Subsystem Tests|Non-Parametric Bayesian with Prior Information from Subsystem Tests]]<br />
* [[Expected Failure Times Plot Example]]<br />
* [[Difference Detection Matrix Example]]<br />
<br />
===Utilities===<br />
* [[Weibull%2B%2B_Equation_Fit_Solver_Example|Equation Fit Solver Example]]<br />
* Event Log Folio. View it in '''[http://www.reliasoft.com/Weibull/examples/rc6/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_6.html Video]'''<br />
<br />
* [[Maintenance Planning Example]]<br />
* [[Monte Carlo Simulation Example]]<br />
* [[Target Reliability Tool Example]]<br />
* [[Simulation_Based_Bounds_Example|SimuMatic Example]]</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=BlockSim_Examples&diff=65559BlockSim Examples2020-10-12T20:30:58Z<p>Chuck Smith: </p>
<hr />
<div>{{Allexamplesindex}}<br />
<br />
<br />
==Reference Examples==<br />
*[[BlockSim_Reference_Examples|BlockSim Reference Examples]] (demonstrate how BlockSim solves a variety of problems from published references)<br />
<br />
<br />
==Examples==<br />
===Analytical Diagrams and Optimization===<br />
* Reliability Analysis of a Storage Cluster System - View it in '''[http://www.reliasoft.com/BlockSim/examples/rc1/index.htm HTML]''' or '''[http://www.reliasoft.tv/blocksim/appexamples/blocksim_app_ex_1.html Video]'''<br />
* Optimized Reliability Allocation - View it in '''[http://www.reliasoft.com/BlockSim/examples/rc2/index.htm HTML]''' or '''[http://www.reliasoft.tv/blocksim/appexamples/blocksim_app_ex_2.html Video]'''<br />
*[http://www.reliasoft.com/BlockSim/examples/rc4/index.htm Using RBDs for Modeling Failure Modes]<br />
*[[BlockSim_Analytical_Fault_Tree_and_RBD_Plot_Examples| Analytical Diagram Plots]]<br />
*[[BlockSim_Allocation_Analysis_Example| Allocation Analysis]]<br />
*[[Time-Dependent System Reliability for Components in Series]]<br />
*[[Time-Dependent System Reliability for Components in Parallel]]<br />
*[[Example_Using_a_Distribution_to_Approximate_the_CDF|Using a Distribution to Approximate the System cdf]]<br />
*[[Example Calculating System Reliability with Duty Cycles|Calculating System Reliability with Duty Cycles]]<br />
*[[Load Sharing Configuration Example|Load Sharing Configuration Example - Failure Modes Analysis]]<br />
*[[Standby Configuration Example|Standby Configuration Example - Car Tires]]<br />
*[[Reliability_Importance_Example|Reliability Importance Plot Example]]<br />
<br />
===Fault Tree Diagrams===<br />
*Simple Examples for Fault Tree Gates<br />
**[[AND Gate Example]]<br />
**[[OR Gate Example]]<br />
**[[Voting OR Gate Example]]<br />
**[[Example_Using_Load_Sharing_Gates_in_Fault_Trees|Load Sharing Gate Example]]<br />
**[[Standby Gate Example]]<br />
*[[Same Example Modeled with RBDs or Fault Trees|Same Example Modeled with RBDs or Fault Trees (Modeling Failure Modes)]]<br />
**[http://www.reliasoft.com/BlockSim/examples/rc6/index.htm Using Fault Trees for Modeling Failure Modes]<br />
<br />
===Simulation Diagrams===<br />
*[http://www.reliasoft.com/BlockSim/examples/rc5/index.htm RAM Analysis for Remote Telecommunications System]<br />
*[http://www.reliasoft.com/BlockSim/examples/rc3/index.htm Effect of Inspection Intervals]<br />
*[[BlockSim_Example:_CM_Triggered_by_Subsystem_Down|Corrective Maintenance Triggered by Failure of a Subsystem]]<br />
*[[Life Cycle Cost Analysis Example]]<br />
<br />
===Phase Diagrams===<br />
*[[BlockSim_Example:_Aircraft_Phases_with_Forced_Landing|Aircraft Phases with Forced Landing]]<br />
*[[BlockSim_Phase_Simulation_Plot_Examples|Oil Refinery Phase Diagram Simulation]]<br />
*[[Example Using Success Failure Paths in Phase Diagrams|Using Success/Failure Paths in Phase Diagrams]]<br />
*[[Example Using Subdiagram Phase Blocks|Using Subdiagram Phase Blocks]]<br />
*[[Interval Maintenance Threshold Example|Using an Interval Maintenance Threshold in a Maintenance Phase]]<br />
*[[Phase Throughput Examples]]<br />
<br />
===State Change Triggers (SCTs)===<br />
*[[Example Using SCT for Standby Rotation|Using State Change Triggers for Standby Rotation]]<br />
*[[Example_Using_SCT_to_Analyze_Tire_Maintenance|Using State Change Triggers to Analyze Tire Maintenance]]<br />
*[[Example_Using_SCT_to_Analyze_Standby_with_Delay|Using State Change Triggers to Model Standby with Delay]]<br />
*[[Example_Using_SCT_to_Model_Two_Standby_Blocks|Using State Change Triggers to Model Two Standby Devices]]<br />
*[[Example_Demonstrating_the_State_Upon_Repair_Option_for_SCT|SCT: The State Upon Repair Option]]<br />
*[[BlockSim_Example:_Default_OFF_unless_SCT_Overridden|Demonstrating the "Default OFF unless SCT overridden" option]]<br />
*[[BlockSim_Example:_Default_ON_unless_SCT_Overridden|Demonstrating the "Default ON unless SCT overridden" option]]<br />
<br />
===Throughput Analysis===<br />
*[[Simple_Throughput_Analysis_Example|Simple Throughput Analysis Example]]<br />
<br />
===Utilities===<br />
*[[Overlay_Plot_Example|Overlay Plots]]<br />
*[[Optimum Replacement Time Example]]<br />
*[[BlockSim_Analytical_FRED_Report_Example| Analytical FRED Report]]</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Weibull%2B%2B_Examples&diff=65558Weibull++ Examples2020-10-12T20:30:43Z<p>Chuck Smith: </p>
<hr />
<div>{{Allexamplesindex}}<br />
<br />
==Reference Examples==<br />
*[[Weibull%2B%2B_Reference_Examples|Weibull++ Reference Examples]] (demonstrate how Weibull++ solves a variety of problems from published references)<br />
<br />
<br />
==Examples==<br />
===Parametric Life Data Analysis===<br />
*Simple Probability Plotting Examples: [[Probability Plotting Example|Simple Example]], [[3-Parameter_Weibull_Example|3P-Weibull Example]], [[1P_Exponential_Example|1P-Exponential]], [[Normal Distribution Probability Plotting Example|Normal]], [[Example:_Lognormal_Distribution_Probability_Plot|Lognormal]]<br />
*[[Standard_Folio_Plots|Standard Folio Plots]]<br />
* Competing Failure Modes Analysis<br />
:* Two Failure Modes Example. View it in '''[http://www.reliasoft.com/Weibull/examples/rc10/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_10.html Video]'''<br />
:* [[Complex_Failure_Modes_Example|Complex Failure Modes Example]]<br />
<br />
===Non-Parametric Life Data Analysis===<br />
*[[Weibull++ Non-Parametric LDA Plot Example|Kaplan-Meier Method]]<br />
*Simple Actuarial Method. View it in '''[http://www.reliasoft.com/Weibull/examples/rc7/index.htm HTML]'''<br />
<br />
===Degradation Data Analysis===<br />
* Crack Propagation Example (Point Estimation). View it in '''[http://www.reliasoft.com/Weibull/examples/rc4/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_4.html Video]'''<br />
<br />
===Recurrent Event Data Analysis===<br />
*[[Example:_Parametric_RDA_-_Air_Condition_Unit|Parametric RDA - Aircraft Air Condition Unit]]<br />
*[[Non_Parametric_RDA_MCF_Example|Non-Parametric RDA - Mean Cumulative Function (MCF) Example]]<br />
*Non-Parametric RDA - Manual Transmission. View it in '''[http://www.reliasoft.com/Weibull/examples/rc8/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_8.html Video]'''<br />
<br />
===Stress-Strength Analysis and Life Comparison===<br />
* [[Stress-Strength_Parameter_Uncertainty_Example|Stress-Strength Analysis Example]]<br />
* [[Stress-Strength_Analysis_in_Design_for_Reliability|Stress-Strength Analysis in DFR - Target Reliability Parameter Estimator]]<br />
* [[Life_Comparison_Examples|Life Comparison Examples: Using Contour Plots or Life Comparison Tool]]<br />
<br />
===Warranty Data Analysis===<br />
* [[Warranty_Data_Analysis_Dates_Format_Example|Dates of Failure Format Warranty Analysis]]<br />
* Nevada Chart Format Warranty Analysis. View it in '''[http://www.reliasoft.com/Weibull/examples/rc5/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_5.html Video]'''<br />
* [[Warranty_Analysis_Non-Homogeneous_Data_Example|Non-Homogeneous Data Warranty Analysis]]<br />
* [[Non-Homogeneous_Data_with_Subset_IDs_Example|Statistical Process Control Example]]<br />
* [[Warranty_Data_Analysis_Times-to-Failure_Format_with_Plot_Example|Times-to-Failure Format Warranty Analysis]]<br />
* [[Warranty_Analysis_Usage_Format_Example|Usage-Based Format Warranty Analysis]]<br />
<br />
===Test Design Examples===<br />
* Reliability Demonstration Test (RDT) Design<br />
**[[Parametric_Binomial_Example_-_Demonstrate_Reliability|Parametric Binomial - Test to Demonstrate Reliability]]<br />
** [[Parametric_Binomial_Example_-_Demonstrate_MTTF|Parametric Binomial - Test to Demonstrate MTTF]]<br />
** [[Non-Parametric Binomial Test Design Example|Non-Parametric Binomial Test Design]]<br />
** [[Exponential_Chi-Squared_Example|Exponential Chi-Squared Test Design]]<br />
** [[Non-Parametric Bayesian - Expert Opinion|Non-Parametric Bayesian with Prior Information from Expert Opinion]]<br />
** [[Non-Parametric Bayesian - Subsystem Tests|Non-Parametric Bayesian with Prior Information from Subsystem Tests]]<br />
* [[Expected Failure Times Plot Example]]<br />
* [[Difference Detection Matrix Example]]<br />
<br />
===Utilities===<br />
* [[Weibull%2B%2B_Equation_Fit_Solver_Example|Equation Fit Solver Example]]<br />
* Event Log Folio. View it in '''[http://www.reliasoft.com/Weibull/examples/rc6/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_6.html Video]'''<br />
<br />
* [[Maintenance Planning Example]]<br />
* [[Monte Carlo Simulation Example]]<br />
* [[Target Reliability Tool Example]]<br />
* [[Simulation_Based_Bounds_Example|SimuMatic Example]]</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=BlockSim_Examples&diff=65557BlockSim Examples2020-10-12T20:30:08Z<p>Chuck Smith: Remove old BlockSim logo</p>
<hr />
<div><br />
= [[BlockSim_Examples|BlockSim Examples]] = __NOTOC__<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
==Reference Examples==<br />
*[[BlockSim_Reference_Examples|BlockSim Reference Examples]] (demonstrate how BlockSim solves a variety of problems from published references)<br />
<br />
<br />
==Examples==<br />
===Analytical Diagrams and Optimization===<br />
* Reliability Analysis of a Storage Cluster System - View it in '''[http://www.reliasoft.com/BlockSim/examples/rc1/index.htm HTML]''' or '''[http://www.reliasoft.tv/blocksim/appexamples/blocksim_app_ex_1.html Video]'''<br />
* Optimized Reliability Allocation - View it in '''[http://www.reliasoft.com/BlockSim/examples/rc2/index.htm HTML]''' or '''[http://www.reliasoft.tv/blocksim/appexamples/blocksim_app_ex_2.html Video]'''<br />
*[http://www.reliasoft.com/BlockSim/examples/rc4/index.htm Using RBDs for Modeling Failure Modes]<br />
*[[BlockSim_Analytical_Fault_Tree_and_RBD_Plot_Examples| Analytical Diagram Plots]]<br />
*[[BlockSim_Allocation_Analysis_Example| Allocation Analysis]]<br />
*[[Time-Dependent System Reliability for Components in Series]]<br />
*[[Time-Dependent System Reliability for Components in Parallel]]<br />
*[[Example_Using_a_Distribution_to_Approximate_the_CDF|Using a Distribution to Approximate the System cdf]]<br />
*[[Example Calculating System Reliability with Duty Cycles|Calculating System Reliability with Duty Cycles]]<br />
*[[Load Sharing Configuration Example|Load Sharing Configuration Example - Failure Modes Analysis]]<br />
*[[Standby Configuration Example|Standby Configuration Example - Car Tires]]<br />
*[[Reliability_Importance_Example|Reliability Importance Plot Example]]<br />
<br />
===Fault Tree Diagrams===<br />
*Simple Examples for Fault Tree Gates<br />
**[[AND Gate Example]]<br />
**[[OR Gate Example]]<br />
**[[Voting OR Gate Example]]<br />
**[[Example_Using_Load_Sharing_Gates_in_Fault_Trees|Load Sharing Gate Example]]<br />
**[[Standby Gate Example]]<br />
*[[Same Example Modeled with RBDs or Fault Trees|Same Example Modeled with RBDs or Fault Trees (Modeling Failure Modes)]]<br />
**[http://www.reliasoft.com/BlockSim/examples/rc6/index.htm Using Fault Trees for Modeling Failure Modes]<br />
<br />
===Simulation Diagrams===<br />
*[http://www.reliasoft.com/BlockSim/examples/rc5/index.htm RAM Analysis for Remote Telecommunications System]<br />
*[http://www.reliasoft.com/BlockSim/examples/rc3/index.htm Effect of Inspection Intervals]<br />
*[[BlockSim_Example:_CM_Triggered_by_Subsystem_Down|Corrective Maintenance Triggered by Failure of a Subsystem]]<br />
*[[Life Cycle Cost Analysis Example]]<br />
<br />
===Phase Diagrams===<br />
*[[BlockSim_Example:_Aircraft_Phases_with_Forced_Landing|Aircraft Phases with Forced Landing]]<br />
*[[BlockSim_Phase_Simulation_Plot_Examples|Oil Refinery Phase Diagram Simulation]]<br />
*[[Example Using Success Failure Paths in Phase Diagrams|Using Success/Failure Paths in Phase Diagrams]]<br />
*[[Example Using Subdiagram Phase Blocks|Using Subdiagram Phase Blocks]]<br />
*[[Interval Maintenance Threshold Example|Using an Interval Maintenance Threshold in a Maintenance Phase]]<br />
*[[Phase Throughput Examples]]<br />
<br />
===State Change Triggers (SCTs)===<br />
*[[Example Using SCT for Standby Rotation|Using State Change Triggers for Standby Rotation]]<br />
*[[Example_Using_SCT_to_Analyze_Tire_Maintenance|Using State Change Triggers to Analyze Tire Maintenance]]<br />
*[[Example_Using_SCT_to_Analyze_Standby_with_Delay|Using State Change Triggers to Model Standby with Delay]]<br />
*[[Example_Using_SCT_to_Model_Two_Standby_Blocks|Using State Change Triggers to Model Two Standby Devices]]<br />
*[[Example_Demonstrating_the_State_Upon_Repair_Option_for_SCT|SCT: The State Upon Repair Option]]<br />
*[[BlockSim_Example:_Default_OFF_unless_SCT_Overridden|Demonstrating the "Default OFF unless SCT overridden" option]]<br />
*[[BlockSim_Example:_Default_ON_unless_SCT_Overridden|Demonstrating the "Default ON unless SCT overridden" option]]<br />
<br />
===Throughput Analysis===<br />
*[[Simple_Throughput_Analysis_Example|Simple Throughput Analysis Example]]<br />
<br />
===Utilities===<br />
*[[Overlay_Plot_Example|Overlay Plots]]<br />
*[[Optimum Replacement Time Example]]<br />
*[[BlockSim_Analytical_FRED_Report_Example| Analytical FRED Report]]</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Weibull%2B%2B_Examples&diff=65556Weibull++ Examples2020-10-12T20:28:55Z<p>Chuck Smith: Remove old Weibull++ logo</p>
<hr />
<div>{{Allexamplesindex}}<br />
<br />
= [[Weibull++_Examples|Weibull++ Examples]] = __NOTOC__<br />
<br />
==Reference Examples==<br />
*[[Weibull%2B%2B_Reference_Examples|Weibull++ Reference Examples]] (demonstrate how Weibull++ solves a variety of problems from published references)<br />
<br />
<br />
==Examples==<br />
===Parametric Life Data Analysis===<br />
*Simple Probability Plotting Examples: [[Probability Plotting Example|Simple Example]], [[3-Parameter_Weibull_Example|3P-Weibull Example]], [[1P_Exponential_Example|1P-Exponential]], [[Normal Distribution Probability Plotting Example|Normal]], [[Example:_Lognormal_Distribution_Probability_Plot|Lognormal]]<br />
*[[Standard_Folio_Plots|Standard Folio Plots]]<br />
* Competing Failure Modes Analysis<br />
:* Two Failure Modes Example. View it in '''[http://www.reliasoft.com/Weibull/examples/rc10/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_10.html Video]'''<br />
:* [[Complex_Failure_Modes_Example|Complex Failure Modes Example]]<br />
<br />
===Non-Parametric Life Data Analysis===<br />
*[[Weibull++ Non-Parametric LDA Plot Example|Kaplan-Meier Method]]<br />
*Simple Actuarial Method. View it in '''[http://www.reliasoft.com/Weibull/examples/rc7/index.htm HTML]'''<br />
<br />
===Degradation Data Analysis===<br />
* Crack Propagation Example (Point Estimation). View it in '''[http://www.reliasoft.com/Weibull/examples/rc4/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_4.html Video]'''<br />
<br />
===Recurrent Event Data Analysis===<br />
*[[Example:_Parametric_RDA_-_Air_Condition_Unit|Parametric RDA - Aircraft Air Condition Unit]]<br />
*[[Non_Parametric_RDA_MCF_Example|Non-Parametric RDA - Mean Cumulative Function (MCF) Example]]<br />
*Non-Parametric RDA - Manual Transmission. View it in '''[http://www.reliasoft.com/Weibull/examples/rc8/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_8.html Video]'''<br />
<br />
===Stress-Strength Analysis and Life Comparison===<br />
* [[Stress-Strength_Parameter_Uncertainty_Example|Stress-Strength Analysis Example]]<br />
* [[Stress-Strength_Analysis_in_Design_for_Reliability|Stress-Strength Analysis in DFR - Target Reliability Parameter Estimator]]<br />
* [[Life_Comparison_Examples|Life Comparison Examples: Using Contour Plots or Life Comparison Tool]]<br />
<br />
===Warranty Data Analysis===<br />
* [[Warranty_Data_Analysis_Dates_Format_Example|Dates of Failure Format Warranty Analysis]]<br />
* Nevada Chart Format Warranty Analysis. View it in '''[http://www.reliasoft.com/Weibull/examples/rc5/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_5.html Video]'''<br />
* [[Warranty_Analysis_Non-Homogeneous_Data_Example|Non-Homogeneous Data Warranty Analysis]]<br />
* [[Non-Homogeneous_Data_with_Subset_IDs_Example|Statistical Process Control Example]]<br />
* [[Warranty_Data_Analysis_Times-to-Failure_Format_with_Plot_Example|Times-to-Failure Format Warranty Analysis]]<br />
* [[Warranty_Analysis_Usage_Format_Example|Usage-Based Format Warranty Analysis]]<br />
<br />
===Test Design Examples===<br />
* Reliability Demonstration Test (RDT) Design<br />
**[[Parametric_Binomial_Example_-_Demonstrate_Reliability|Parametric Binomial - Test to Demonstrate Reliability]]<br />
** [[Parametric_Binomial_Example_-_Demonstrate_MTTF|Parametric Binomial - Test to Demonstrate MTTF]]<br />
** [[Non-Parametric Binomial Test Design Example|Non-Parametric Binomial Test Design]]<br />
** [[Exponential_Chi-Squared_Example|Exponential Chi-Squared Test Design]]<br />
** [[Non-Parametric Bayesian - Expert Opinion|Non-Parametric Bayesian with Prior Information from Expert Opinion]]<br />
** [[Non-Parametric Bayesian - Subsystem Tests|Non-Parametric Bayesian with Prior Information from Subsystem Tests]]<br />
* [[Expected Failure Times Plot Example]]<br />
* [[Difference Detection Matrix Example]]<br />
<br />
===Utilities===<br />
* [[Weibull%2B%2B_Equation_Fit_Solver_Example|Equation Fit Solver Example]]<br />
* Event Log Folio. View it in '''[http://www.reliasoft.com/Weibull/examples/rc6/index.htm HTML]''' or '''[http://www.reliasoft.tv/weibull/appexamples/weibull_app_ex_6.html Video]'''<br />
<br />
* [[Maintenance Planning Example]]<br />
* [[Monte Carlo Simulation Example]]<br />
* [[Target Reliability Tool Example]]<br />
* [[Simulation_Based_Bounds_Example|SimuMatic Example]]</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Main_Page&diff=65555Main Page2020-10-12T20:23:17Z<p>Chuck Smith: Removing old product names</p>
<hr />
<div>{{DISPLAYTITLE:ReliaWiki}} __NOTOC__ __NOEDITSECTION__ <br />
<div style="position:relative; float:left; display:block; width:100%; margin:10px;"><br />
ReliaWiki is owned and maintained by [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=p8TH7lvo8O%2FAHl72KXuPWlZmIR3P7GFj HBM Prenscia] and is an extension of [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=vQQfu%2FkoXaOAQyVYZnqaSTIPBB24C0kx weibull.com]. <!--For additional resources, visit [http://www.reliasoft.tv ReliaSoft.tv], [http://www.reliability-discussion.com/ Reliability Discussion Forum] and the [http://www.reliabilityprofessional.org/ Certified Reliability Professional (CRP) Program]. -->Due to continuous improvement to ReliaSoft software, the product images and step-by-step instructions featured on ReliaWiki may not reflect the current version.<br />
</div><br />
<br />
<div style="position:relative; float:left; width:100%;"><br />
<br />
<div style="position:relative; float:left; width:49%; margin:5px;"><br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=blue_triangle.png<br />
|title=Life Data Analysis<br />
|links=<br />
{{TitleBoxLink|link=Life_Data_Analysis_Reference_Book|text=Reference Book}}<br />
{{TitleBoxLink|link=Weibull++_Examples|text=Weibull++ Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=green_triangle.png<br />
|title=System Analysis (RBDs and Fault Trees)<br />
|links=<br />
{{TitleBoxLink|link=System_Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=BlockSim_Examples|text=BlockSim Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=red_triangle.png<br />
|title=Reliability Growth and Repairable System Analysis<br />
|links=<br />
{{TitleBoxLink|link=Reliability_Growth_and_Repairable_System_Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=RGA_Examples|text=Weibull++ Reliability Growth Module Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=rcm_triangle.png<br />
|title=Reliability Centered Maintenance (RCM)<br />
|links=<br />
{{TitleBoxLink|link=RCM%2B%2B_Examples|text=RCM++ Software Examples}}<br />
}}<br />
</div><br />
<br />
<div style="position:relative; float:left; width:49%; margin:5px;"><br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=yellow_triangle.png<br />
|title=Accelerated Life Testing Data Analysis<br />
|links=<br />
{{TitleBoxLink|link=Accelerated Life Testing Data Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=ALTA_Examples|text=Weibull++ Accelerated Life Testing Module Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=doe_triangle.png<br />
|title=Experiment Design and Analysis (DOE)<br />
|links=<br />
{{TitleBoxLink|link=Experiment_Design_and_Analysis_Reference|text=Reference Book}}<br />
<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=fmea_triangle.png<br />
|title=Failure Modes &amp; Effects Analysis (FMEA)<br />
|links=<br />
{{TitleBoxLink|link=FMEA_and_RCM_Articles|text=Articles}}<br />
{{TitleBoxLink|link=Xfmea_Examples|text=XFMEA Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=api_triangle.png<br />
|title=ReliaSoft API<br />
|links=<br />
{{TitleBoxLink|link=ReliaSoft API Reference|text=API Reference}}<br />
{{TitleBoxLink|link=API_Changelog|text=API Changelog}}<br />
}}<br />
</div><br />
</div><br />
<div style="position:relative; float:left; width:100%;"><br />
<br><br><!--<cshow logged="1">This text will appear if a user with membership to 'sysop' group views this page</cshow> -->{{ReliaSoft Footer}}<br />
</div></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=About&diff=65554About2020-08-19T23:10:25Z<p>Chuck Smith: </p>
<hr />
<div>__NOTOC__ __NOEDITSECTION__<br />
<br />
<br />
{{Template:Font|About ReliaWiki.org|15|tahoma|bold|gray}}<br />
<br />
<br />
ReliaWiki® provides a platform for sharing reliability engineering knowledge. ReliaWiki is a an extension of [http://www.weibull.com weibull.com] a web-based resource portal for professionals in reliability engineering and related fields. The weibull.com site's free reliability resources include online textbooks, reliability software and tools, discussion forums and numerous reference publications. The ReliaWiki project expands the resources available.<br />
<br />
<br />
Both [http://www.weibull.com weibull.com] and [http://www.reliawiki.org ReliaWiki.org] are services of [http://www.ReliaSoft.com ReliaSoft Corporation].<br />
<br />
<br />
<br><!--<cshow logged="1">This text will appear if a user with membership to 'sysop' group views this page</cshow>--><br />
<br />
{{Signup}}<br />
{{ReliaSoft Footer}}</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Main_Page&diff=65550Main Page2020-08-19T22:53:51Z<p>Chuck Smith: </p>
<hr />
<div>{{DISPLAYTITLE:ReliaWiki}} __NOTOC__ __NOEDITSECTION__ <br />
<div style="position:relative; float:left; display:block; width:100%; margin:10px;"><br />
ReliaWiki is owned and maintained by [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=p8TH7lvo8O%2FAHl72KXuPWlZmIR3P7GFj HBM Prenscia] and is an extension of [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=vQQfu%2FkoXaOAQyVYZnqaSTIPBB24C0kx weibull.com]. <!--For additional resources, visit [http://www.reliasoft.tv ReliaSoft.tv], [http://www.reliability-discussion.com/ Reliability Discussion Forum] and the [http://www.reliabilityprofessional.org/ Certified Reliability Professional (CRP) Program]. -->Due to continuous improvement to ReliaSoft software, the product images and step-by-step instructions featured on ReliaWiki may not reflect the current version.<br />
</div><br />
<br />
<div style="position:relative; float:left; width:100%;"><br />
<br />
<div style="position:relative; float:left; width:49%; margin:5px;"><br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=blue_triangle.png<br />
|title=Life Data Analysis<br />
|links=<br />
{{TitleBoxLink|link=Life_Data_Analysis_Reference_Book|text=Reference Book}}<br />
{{TitleBoxLink|link=Weibull++_Examples|text=Weibull++ Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=green_triangle.png<br />
|title=System Analysis (RBDs and Fault Trees)<br />
|links=<br />
{{TitleBoxLink|link=System_Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=BlockSim_Examples|text=BlockSim Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=red_triangle.png<br />
|title=Reliability Growth and Repairable System Analysis<br />
|links=<br />
{{TitleBoxLink|link=Reliability_Growth_and_Repairable_System_Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=RGA_Examples|text=RGA Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=rcm_triangle.png<br />
|title=Reliability Centered Maintenance (RCM)<br />
|links=<br />
{{TitleBoxLink|link=RCM%2B%2B_Examples|text=RCM++ Software Examples}}<br />
}}<br />
</div><br />
<br />
<div style="position:relative; float:left; width:49%; margin:5px;"><br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=yellow_triangle.png<br />
|title=Accelerated Life Testing Data Analysis<br />
|links=<br />
{{TitleBoxLink|link=Accelerated Life Testing Data Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=ALTA_Examples|text=ALTA Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=doe_triangle.png<br />
|title=Experiment Design and Analysis (DOE)<br />
|links=<br />
{{TitleBoxLink|link=Experiment_Design_and_Analysis_Reference|text=Reference Book}}<br />
<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=fmea_triangle.png<br />
|title=Failure Modes &amp; Effects Analysis (FMEA)<br />
|links=<br />
{{TitleBoxLink|link=FMEA_and_RCM_Articles|text=Articles}}<br />
{{TitleBoxLink|link=Xfmea_Examples|text=Xfmea Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=api_triangle.png<br />
|title=ReliaSoft API<br />
|links=<br />
{{TitleBoxLink|link=ReliaSoft API Reference|text=API Reference}}<br />
{{TitleBoxLink|link=API_Changelog|text=API Changelog}}<br />
}}<br />
</div><br />
</div><br />
<div style="position:relative; float:left; width:100%;"><br />
<br><br><!--<cshow logged="1">This text will appear if a user with membership to 'sysop' group views this page</cshow> -->{{ReliaSoft Footer}}<br />
</div></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Main_Page&diff=65541Main Page2019-12-11T21:34:31Z<p>Chuck Smith: </p>
<hr />
<div>{{DISPLAYTITLE:ReliaWiki}} __NOTOC__ __NOEDITSECTION__ <br />
<div style="position:relative; float:left; display:block; width:100%; margin:10px;"><br />
ReliaWiki is owned and maintained by [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=p8TH7lvo8O%2FAHl72KXuPWlZmIR3P7GFj HBM Prenscia] and is an extension of [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=vQQfu%2FkoXaOAQyVYZnqaSTIPBB24C0kx weibull.com]. <!--For additional resources, visit [http://www.reliasoft.tv ReliaSoft.tv], [http://www.reliability-discussion.com/ Reliability Discussion Forum] and the [http://www.reliabilityprofessional.org/ Certified Reliability Professional (CRP) Program]. -->Due to continuous improvement to ReliaSoft software, the product images and step-by-step instructions featured on ReliaWiki may not reflect the current version.<br />
</div><br />
<br />
<div style="position:relative; float:left; width:100%;"><br />
<br />
<div style="position:relative; float:left; width:49%; margin:5px;"><br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=blue_triangle.png<br />
|title=Life Data Analysis<br />
|links=<br />
{{TitleBoxLink|link=Life_Data_Analysis_Reference_Book|text=Reference Book}}<br />
{{TitleBoxLink|link=Weibull++_Examples|text=Weibull++ Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=green_triangle.png<br />
|title=System Analysis (RBDs and Fault Trees)<br />
|links=<br />
{{TitleBoxLink|link=System_Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=BlockSim_Examples|text=BlockSim Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=red_triangle.png<br />
|title=Reliability Growth and Repairable System Analysis<br />
|links=<br />
{{TitleBoxLink|link=Reliability_Growth_and_Repairable_System_Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=RGA_Examples|text=RGA Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=rcm_triangle.png<br />
|title=Reliability Centered Maintenance (RCM)<br />
|links=<br />
{{TitleBoxLink|link=RCM%2B%2B_Examples|text=RCM++ Software Examples}}<br />
}}<br />
</div><br />
<br />
<div style="position:relative; float:left; width:49%; margin:5px;"><br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=yellow_triangle.png<br />
|title=Accelerated Life Testing Data Analysis<br />
|links=<br />
{{TitleBoxLink|link=Accelerated Life Testing Data Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=ALTA_Examples|text=ALTA Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=doe_triangle.png<br />
|title=Experiment Design and Analysis (DOE)<br />
|links=<br />
{{TitleBoxLink|link=Experiment_Design_and_Analysis_Reference|text=Reference Book}}<br />
<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=fmea_triangle.png<br />
|title=Failure Modes &amp; Effects Analysis (FMEA)<br />
|links=<br />
{{TitleBoxLink|link=FMEA_and_RCM_Articles|text=Articles}}<br />
{{TitleBoxLink|link=Xfmea_Examples|text=Xfmea Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=api_triangle.png<br />
|title=ReliaSoft API<br />
|links=<br />
{{TitleBoxLink|link=ReliaSoft API Reference|text=API Reference}}<br />
{{TitleBoxLink|link=API_Changelog|text=API Changelog}}<br />
}}<br />
</div><br />
</div><br />
<div style="position:relative; float:left; width:100%;"><br />
<br><br><cshow logged="1">This text will appear if a user with membership to 'sysop' group views this page</cshow> {{ReliaSoft Footer}}<br />
</div></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=ReliaWiki:General_disclaimer&diff=65539ReliaWiki:General disclaimer2019-11-22T22:10:31Z<p>Chuck Smith: </p>
<hr />
<div>ReliaWiki.org does not endorse or make any representations about the companies, products or services mentioned in third-party announcements. They are provided solely as a convenience to the reliability community. <br />
<br />
<br />
ALL MATERIALS PROVIDED ON THIS SITE ARE PROVIDED "AS IS" WITHOUT ANY WARRANTIES OF ANY KIND INCLUDING WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT OF INTELLECTUAL PROPERTY. <br />
<br />
<br />
HBM Prenscia does not and cannot warrant the accuracy and completeness of the materials at this site. The materials at this site may be out of date and HBM Prenscia makes no commitment to update the materials at this site. <br />
<br />
<br />
IN NO EVENT WILL HBM PRENSCIA, ITS SUPPLIERS, OR OTHER THIRD PARTIES MENTIONED AT THIS SITE BE LIABLE FOR ANY DAMAGES WHATSOEVER (INCLUDING, WITHOUT LIMITATION, THOSE RESULTING FROM LOST PROFITS, LOST DATA OR BUSINESS INTERRUPTION) ARISING OUT OF THE USE, INABILITY TO USE, OR THE RESULTS OF USE OF THIS SITE, ANY WEB SITES LINKED TO THIS SITE, OR THE MATERIALS OR INFORMATION CONTAINED AT ANY OR ALL SUCH SITES, WHETHER BASED ON WARRANTY, CONTRACT, TORT OR ANY OTHER LEGAL THEORY AND WHETHER OR NOT ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. IF YOUR USE OF THE MATERIALS OR INFORMATION FROM THIS SITE RESULTS IN THE NEED FOR SERVICING, REPAIR OR CORRECTION OF EQUIPMENT OR DATA, YOU ASSUME ALL COSTS THEREOF. APPLICABLE LAW MAY NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE LIMITATION OR EXCLUSION MAY NOT APPLY TO YOU.<br />
<br />
{{ReliaSoft Footer}}</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Template:ReliaSoft_Footer&diff=65538Template:ReliaSoft Footer2019-11-22T22:09:40Z<p>Chuck Smith: </p>
<hr />
<div>[[Image: weibullcom_logo.png|Type|Border|Location|right|size|link=http://www.Weibull.com|alt="Go to Weibull.com"|Caption=Weibull]]<br />
[[Image: HBMPrenscia_Logo.png|Type|Border|Location|left|300px|link=https://www.hbmprenscia.com|alt="Go to HBM Prenscia's Home Page"|Caption]]<br />
{| border="0" cellspacing="0" cellpadding="2" width="100%"<br />
|-<br />
| style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" |[[Image:ccsmall.png|link=Copyright Information|align="Center"]] <br />
| style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
Content on this site is available/licensed under the "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License." See [[Copyright Information]] for details.<br />
|}<br />
<noinclude>[[Category: Banners]]</noinclude></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=File:HBMPrenscia_Logo.png&diff=65537File:HBMPrenscia Logo.png2019-11-22T22:08:04Z<p>Chuck Smith: </p>
<hr />
<div></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=ReliaWiki:Privacy_Policy&diff=65536ReliaWiki:Privacy Policy2019-11-22T22:04:22Z<p>Chuck Smith: </p>
<hr />
<div>= Privacy Policy =<br />
<br />
ReliaWiki® is owned and maintained by [https://www.hbmprenscia.com HBM Prenscia]. <br />
<br />
HBM Prenscia’s master privacy policy, which covers all sites owned, operated and maintained by HBM Prenscia, including this site, can be found at: [https://www.hbmprenscia.com/third-party-privacy-notice https://www.hbmprenscia.com/third-party-privacy-notice] <br />
<br />
<br> <br />
<br />
{{Template:ReliaSoft Footer}}</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Simple_Linear_Regression_Analysis&diff=65381Simple Linear Regression Analysis2018-08-09T22:51:30Z<p>Chuck Smith: </p>
<hr />
<div>{{Template:Doebook|3}}<br />
Regression analysis is a statistical technique that attempts to explore and model the relationship between two or more variables. For example, an analyst may want to know if there is a relationship between road accidents and the age of the driver. Regression analysis forms an important part of the statistical analysis of the data obtained from designed experiments and is discussed briefly in this chapter. Every experiment analyzed in a [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Rw01CJDOxn%2FabhkPlZsy6DwBQ%2BaCXsGR Weibull++] DOE folio includes regression results for each of the responses. These results, along with the results from the analysis of variance (explained in the [[One Factor Designs]] and [[General Full Factorial Designs]] chapters), provide information that is useful to identify significant factors in an experiment and explore the nature of the relationship between these factors and the response. Regression analysis forms the basis for all [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Rw01CJDOxn%2FabhkPlZsy6DwBQ%2BaCXsGR Weibull++] DOE folio calculations related to the sum of squares used in the analysis of variance. The reason for this is explained in [[Use_of_Regression_to_Calculate_Sum_of_Squares|Appendix B]]. DOE folios also include a regression tool to see if two or more variables are related, and to explore the nature of the relationship between them. <br />
<br />
This chapter discusses simple linear regression analysis while a [[Multiple_Linear_Regression_Analysis|subsequent chapter]] focuses on multiple linear regression analysis.<br />
<br />
==Simple Linear Regression Analysis== <br />
A linear regression model attempts to explain the relationship between two or more variables using a straight line. Consider the data obtained from a chemical process where the yield of the process is thought to be related to the reaction temperature (see the table below).<br />
<br />
<br />
[[Image:doet4.1.png|center|343px|Yield data observations of a chemical process at different values of reaction temperature.|link=]]<br />
<br />
<br />
This data can be entered in the DOE folio as shown in the following figure:<br />
<br />
<br />
[[Image:doe4_1.png|center|530px|Data entry in the DOE folio for the observations.|link=]]<br />
<br />
<br />
And a scatter plot can be obtained as shown in the following figure. In the scatter plot, the yield <math>y_i\,\!</math> is plotted against different temperature values <math>x_i\,\!</math>.<br />
<br />
<br />
[[Image:doe4_2.png|center|650px|Scatter plot for the data.|link=]]<br />
<br />
<br />
It is clear that no line can be found to pass through all points of the plot. Thus no functional relation exists between the two variables <math>x\,\!</math> and <math>Y\,\!</math>. However, the scatter plot does give an indication that a straight line may exist such that all the points on the plot are scattered randomly around this line. A statistical relation is said to exist in this case. The statistical relation between <math>x\,\!</math> and <math>Y\,\!</math> may be expressed as follows:<br />
<br />
<br />
::<math>Y=\beta_0+\beta_1{x}+\epsilon\,\!</math><br />
<br />
<br />
The above equation is the linear regression model that can be used to explain the relation between <math>x\,\!</math> and <math>Y\,\!</math> that is seen on the scatter plot above. In this model, the mean value of <math>Y\,\!</math> (abbreviated as <math>E(Y)\,\!</math>) is assumed to follow the linear relation:<br />
<br />
<br />
::<math>E(Y) = \beta_0+\beta_1{x}\,\!</math><br />
<br />
<br />
The actual values of <math>Y\,\!</math> (which are observed as yield from the chemical process from time to time and are random in nature) are assumed to be the sum of the mean value, <math>E(Y)\,\!</math>, and a random error term, <math>\epsilon\,\!</math>:<br />
<br />
<br />
::<math>\begin{align}Y = & E(Y)+\epsilon \\ <br />
= & \beta_0+\beta_1{x}+\epsilon\end{align}\,\!</math><br />
<br />
<br />
The regression model here is called a ''simple'' linear regression model because there is just one independent variable, <math>x\,\!</math>, in the model. In regression models, the independent variables are also referred to as regressors or predictor variables. The dependent variable, <math>Y\,\!</math> , is also referred to as the response. The slope, <math>\beta_1\,\!</math>, and the intercept, <math>\beta_0\,\!</math> , of the line <math>E(Y)=\beta_0+\beta_1{x}\,\!</math> are called ''regression coefficients''. The slope, <math>\beta_1\,\!</math>, can be interpreted as the change in the mean value of <math>Y\,\!</math> for a unit change in <math>x\,\!</math>.<br />
<br />
The random error term, <math>\epsilon\,\!</math>, is assumed to follow the normal distribution with a mean of 0 and variance of <math>\sigma^2\,\!</math>. Since <math>Y\,\!</math> is the sum of this random term and the mean value, <math>E(Y)\,\!</math>, which is a constant, the variance of <math>Y\,\!</math> at any given value of <math>x\,\!</math> is also <math>\sigma^2\,\!</math>. Therefore, at any given value of <math>x\,\!</math>, say <math>x_i\,\!</math>, the dependent variable <math>Y\,\!</math> follows a normal distribution with a mean of <math>\beta_0+\beta_1{x_i}\,\!</math> and a standard deviation of <math>\sigma\,\!</math>. This is illustrated in the following figure.<br />
<br />
[[Image:doe4.3.png|center|583px|The normal distribution of <math>Y\,\!</math> for two values of <math>x\,\!</math>. Also shown is the true regression line and the values of the random error term, <math>\epsilon\,\!</math>, corresponding to the two <math>x\,\!</math> values. The true regression line and <math>\epsilon\,\!</math> are usually not known.|link=]]<br />
<br />
===Fitted Regression Line===<br />
The true regression line is usually not known. However, the regression line can be estimated by estimating the coefficients <math>\beta_1\,\!</math> and <math>\beta_0\,\!</math> for an observed data set. The estimates, <math>\hat{\beta}_1\,\!</math> and <math>\hat{\beta}_0\,\!</math>, are calculated using least squares. (For details on least square estimates, refer to [[Appendix:_Life_Data_Analysis_References|Hahn & Shapiro (1967)]].) The estimated regression line, obtained using the values of <math>\hat{\beta}_1\,\!</math> and <math>\hat{\beta}_0\,\!</math>, is called the ''fitted line''. The least square estimates, <math>\hat{\beta}_1\,\!</math> and <math>\hat{\beta}_0\,\!</math>, are obtained using the following equations:<br />
<br />
<br />
::<math>\hat{\beta}_1 = \frac{\sum_{i=1}^n y_i x_i- \frac{(\sum_{i=1}^n y_i) (\sum_{i=1}^n x_i)}{n}}{\sum_{i=1}^n (x_i-\bar{x})^2}\,\!</math><br />
::<math>\hat{\beta}_0=\bar{y}-\hat{\beta}_1 \bar{x}\,\!</math><br />
<br />
<br />
where <math>\bar{y}\,\!</math> is the mean of all the observed values and <math>\bar{x}\,\!</math> is the mean of all values of the predictor variable at which the observations were taken. <math>\bar{y}\,\!</math> is calculated using <math>\bar{y}=(1/n)\sum_{i=1}^n y_i\,\!</math> and <math>\bar{x}\,\!</math> is calculated using <math>\bar{x}=(1/n)\sum_{i=1}^n x_i\,\!</math>.<br />
<br />
<br />
Once <math>\hat{\beta}_1\,\!</math> and <math>\hat{\beta}_0\,\!</math> are known, the fitted regression line can be written as:<br />
<br />
<br />
::<math>\hat{y}=\hat{\beta}_0+\hat{\beta}_1 x\,\!</math><br />
<br />
<br />
where <math>\hat{y}\,\!</math> is the fitted or estimated value based on the fitted regression model. It is an estimate of the mean value, <math>E(Y)\,\!</math>. The fitted value,<math>\hat{y}_i\,\!</math>, for a given value of the predictor variable, <math>x_i\,\!</math>, may be different from the corresponding observed value, <math>y_i\,\!</math>. The difference between the two values is called the ''residual'', <math>e_i\,\!</math>:<br />
<br />
<br />
::<math>e_i=y_i-\hat{y}_i\,\!</math><br />
<br />
<br />
====Calculation of the Fitted Line Using Least Square Estimates====<br />
The least square estimates of the regression coefficients can be obtained for the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]] as follows:<br />
<br />
<br />
::<math>\begin{align}\hat{\beta}_1 = & \frac{\sum_{i=1}^n y_i x_i- \frac{(\sum_{i=1}^n y_i) (\sum_{i=1}^n x_i)}{n}}{\sum_{i=1}^n (x_i-\bar{x})^2} \\<br />
= & \frac{322516-\frac{4158 \times 1871}{25}}{5679.36} \\<br />
= & 1.9952 \approx 2.00\end{align}\,\!</math><br />
<br />
<br />
::<math>\begin{align}\hat{\beta}_0 = & \bar{y}-\hat{\beta}_1 \bar{x} \\<br />
= & 166.32 - 1.9952 \times 74.84 \\<br />
= & 17.0016 \approx 17.00\end{align}\,\!</math><br />
<br />
<br />
Knowing <math>\hat{\beta}_0\,\!</math> and <math>\hat{\beta}_1\,\!</math>, the fitted regression line is:<br />
<br />
<br />
::<math>\begin{align}\hat{y} = & \hat{\beta}_0 + \hat{\beta}_1 x \\<br />
= & 17.0016 + 1.9952 \times x \\<br />
\approx & 17 + 2{x}\end{align}\,\!</math><br />
<br />
<br />
This line is shown in the figure below.<br />
<br />
<br />
[[Image:doe4.4.png|center|637px|Fitted regression line for the data. Also shown is the residual for the 21st observation.|link=]]<br />
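As a quick check outside the DOE folio, the least square estimates above can be reproduced from the summary statistics quoted in the calculation (the raw 25-point data table is shown as an image, so the sums below are taken from the worked example, not recomputed from raw data). This Python sketch is illustrative only:<br />

```python
# Summary statistics quoted in the worked example (n = 25 observations)
n = 25
sum_xy = 322516.0   # sum of y_i * x_i
sum_y = 4158.0      # sum of y_i
sum_x = 1871.0      # sum of x_i
sxx = 5679.36       # sum of (x_i - x_bar)^2

x_bar = sum_x / n   # 74.84
y_bar = sum_y / n   # 166.32

# Least square estimates of slope and intercept
beta1_hat = (sum_xy - sum_y * sum_x / n) / sxx
beta0_hat = y_bar - beta1_hat * x_bar

print(round(beta1_hat, 4))  # ~1.9952
print(round(beta0_hat, 2))  # ~17.0
```

The results agree with the hand calculation, giving the fitted line <math>\hat{y} \approx 17 + 2x\,\!</math>.<br />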
<br />
<br />
Once the fitted regression line is known, the fitted value of <math>Y\,\!</math> corresponding to any observed data point can be calculated. For example, the fitted value corresponding to the 21st observation in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]] is:<br />
<br />
<br />
::<math>\begin{align}\hat{y}_{21}= & \hat{\beta}_0 + \hat{\beta}_1 x_{21} \\<br />
= & (17.0016) + (1.9952) \times 93 \\<br />
= & 202.6\end{align}\,\!</math><br />
<br />
<br />
The observed response at this point is <math>y_{21}=194\,\!</math>. Therefore, the residual at this point is:<br />
<br />
<br />
::<math>\begin{align}e_{21} = & y_{21}-\hat{y}_{21} \\<br />
= & 194-202.6 \\<br />
= & -8.6\end{align}\,\!</math><br />
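The fitted value and residual for the 21st observation can be verified the same way, using the fitted coefficients from the text (the observation <math>x_{21}=93\,\!</math>, <math>y_{21}=194\,\!</math> is taken from the worked example):<br />

```python
# Fitted coefficients and the 21st observation, as quoted in the text
beta0_hat, beta1_hat = 17.0016, 1.9952
x21, y21 = 93.0, 194.0

y_hat_21 = beta0_hat + beta1_hat * x21   # fitted value, ~202.6
e21 = y21 - y_hat_21                     # residual, ~-8.6
```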
<br />
<br />
In DOE folios, fitted values and residuals can be calculated. The values are shown in the figure below.<br />
<br />
<br />
[[Image:doe4_5.png|center|880px|Fitted values and residuals for the data.|link=]]<br />
<br />
==Hypothesis Tests in Simple Linear Regression==<br />
<br />
The following sections discuss hypothesis tests on the regression coefficients in simple linear regression. These tests can be carried out if it can be assumed that the random error term, <math>\epsilon\,\!</math>, is normally and independently distributed with a mean of zero and variance of <math>\sigma^2\,\!</math>. <br />
<br />
===t Tests===<br />
<br />
The <math>t\,\!</math> tests are used to conduct hypothesis tests on the regression coefficients obtained in simple linear regression. A statistic based on the <math>t\,\!</math> distribution is used to test the two-sided hypothesis that the true slope, <math>\beta_1\,\!</math>, equals some constant value, <math>\beta_{1,0}\,\!</math>. The statements for the hypothesis test are expressed as:<br />
<br />
<br />
::<math>\begin{align}H_0 & : & \beta_1=\beta_{1,0} \\<br />
H_1 & : & \beta_{1}\ne\beta_{1,0}\end{align}\,\!</math><br />
<br />
<br />
The test statistic used for this test is:<br />
<br />
<br />
::<math>T_0=\frac{\hat{\beta}_1-\beta_{1,0}}{se(\hat{\beta}_1)}\,\!</math><br />
<br />
<br />
where <math>\hat{\beta}_1\,\!</math> is the least square estimate of <math>\beta_1\,\!</math>, and <math>se(\hat{\beta}_1)\,\!</math> is its standard error. The value of <math>se(\hat{\beta}_1)\,\!</math> can be calculated as follows:<br />
<br />
<br />
:<math>se(\hat{\beta}_1)= \sqrt{\frac{\frac{\displaystyle \sum_{i=1}^n e_i^2}{n-2}}{\displaystyle \sum_{i=1}^n (x_i-\bar{x})^2}}\,\!</math><br />
<br />
<br />
The test statistic, <math>T_0\,\!</math> , follows a <math>t\,\!</math> distribution with <math>(n-2)\,\!</math> degrees of freedom, where <math>n\,\!</math> is the total number of observations. The null hypothesis, <math>H_0\,\!</math>, is accepted if the calculated value of the test statistic is such that:<br />
<br />
<br />
::<math>-t_{\alpha/2,n-2}<T_0<t_{\alpha/2,n-2}\,\!</math><br />
<br />
<br />
where <math>t_{\alpha/2,n-2}\,\!</math> and <math>-t_{\alpha/2,n-2}\,\!</math> are the critical values for the two-sided hypothesis. <math>t_{\alpha/2,n-2}\,\!</math> is the percentile of the <math>t\,\!</math> distribution corresponding to a cumulative probability of <math>(1-\alpha/2)\,\!</math> and <math>\alpha\,\!</math> is the significance level. <br />
<br />
If the value of <math>\beta_{1,0}\,\!</math> used is zero, then the hypothesis tests for the significance of regression. In other words, the test indicates if the fitted regression model is of value in explaining variations in the observations or if you are trying to impose a regression model when no true relationship exists between <math>x\,\!</math> and <math>Y\,\!</math>. Failure to reject <math>H_0:\beta_1=0\,\!</math> implies that no linear relationship exists between <math>x\,\!</math> and <math>Y\,\!</math>. This result may be obtained when the scatter plots of <math>Y\,\!</math> against <math>x\,\!</math> are as shown in (a) and (b) of the following figure. (a) represents the case where no model exists for the observed data. In this case you would be trying to fit a regression model to noise or random variation. (b) represents the case where the true relationship between <math>x\,\!</math> and <math>Y\,\!</math> is not linear. (c) and (d) represent the case when <math>H_0:\beta_1=0\,\!</math> is rejected, implying that a model does exist between <math>x\,\!</math> and <math>Y\,\!</math>. (c) represents the case where the linear model is sufficient. (d) represents the case where a higher order model may be needed.<br />
<br />
[[Image:doe4.6.png|center|500px|Possible scatter plots of <math>y\,\!</math> against <math>x\,\!</math>. Plots (a) and (b) represent cases when <math>H_0:\beta_1=0\,\!</math> is not rejected. Plots (c) and (d) represent cases when <math>H_0:\beta_1=0\,\!</math> is rejected.|link=]]<br />
<br />
<br />
A similar procedure can be used to test the hypothesis on the intercept. The test statistic used in this case is:<br />
<br />
<br />
::<math>T_0=\frac{\hat{\beta}_0-\beta_{0,0}}{se(\hat{\beta}_0)}\,\!</math><br />
<br />
<br />
where <math>\hat{\beta}_0\,\!</math> is the least square estimate of <math>\beta_0\,\!</math>, and <math>se(\hat{\beta}_0)\,\!</math> is its standard error which is calculated using:<br />
<br />
<br />
:<math>se(\hat{\beta}_0)= \sqrt{\frac{\displaystyle\sum_{i=1}^n e_i^2}{n-2} \Bigg[ \frac{1}{n}+\frac{\bar{x}^2}{\displaystyle\sum_{i=1}^n (x_i-\bar{x})^2} \Bigg]}\,\!</math><br />
<br />
<br />
'''Example'''<br />
<br />
<br />
The test for the significance of regression for the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]] is illustrated in this example. The test is carried out using the <math>t\,\!</math> test on the coefficient <math>\beta_1\,\!</math>. The hypothesis to be tested is <math>H_0 : \beta_1 = 0\,\!</math>. To calculate the statistic to test <math>H_0\,\!</math>, the estimate, <math>\hat{\beta}_1\,\!</math>, and the standard error, <math>se(\hat{\beta}_1)\,\!</math>, are needed. The value of <math>\hat{\beta}_1\,\!</math> was obtained in [[Simple_Linear_Regression_Analysis#Fitted_Regression_Line|this section]]. The standard error can be calculated as follows:<br />
<br />
<br />
:<math>\begin{align}se(\hat{\beta}_1) = & \sqrt{\frac{\frac{\displaystyle \sum_{i=1}^n e_i^2}{n-2}}{\displaystyle \sum_{i=1}^n (x_i-\bar{x})^2}} \\<br />
= & \sqrt{\frac{(371.627/23)}{5679.36}} \\<br />
= & 0.0533\end{align}\,\!</math><br />
<br />
<br />
Then, the test statistic can be calculated using the following equation:<br />
<br />
<br />
::<math>\begin{align}t_0 = & \frac{\hat{\beta}_1-\beta_{1,0}}{se(\hat{\beta}_1)} \\<br />
= & \frac{1.9952-0}{0.0533} \\<br />
= & 37.4058\end{align}\,\!</math><br />
<br />
<br />
The <math>p\,\!</math> value corresponding to this statistic based on the <math>t\,\!</math> distribution with 23 (n-2 = 25-2 = 23) degrees of freedom can be obtained as follows:<br />
<br />
<br />
::<math>\begin{align}p\text{ }value = & 2\times (1-P(T\le t_0)) \\<br />
= & 2 \times (1-0.999999) \\<br />
\approx & 0\end{align}\,\!</math><br />
<br />
<br />
Assuming that the desired significance level is 0.1, since <math>p\,\!</math> value < 0.1, <math>H_0 : \beta_1=0\,\!</math> is rejected indicating that a relation exists between temperature and yield for the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]]. Using this result along with the scatter plot, it can be concluded that the relationship between temperature and yield is linear.<br />
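The standard error and test statistic for this <math>t\,\!</math> test can be checked numerically from the quantities quoted above (the critical value below is taken from standard <math>t\,\!</math> tables, not computed):<br />

```python
from math import sqrt

# Quantities quoted in the text (n = 25 observations)
n = 25
sse = 371.627       # error sum of squares, sum of e_i^2
sxx = 5679.36       # sum of (x_i - x_bar)^2
beta1_hat = 1.9952  # least square estimate of the slope

# Standard error of the slope: sqrt( (SSE/(n-2)) / Sxx )
se_beta1 = sqrt((sse / (n - 2)) / sxx)

# Test statistic for H0: beta1 = 0
t0 = (beta1_hat - 0) / se_beta1

print(round(se_beta1, 4))  # ~0.0533
print(round(t0, 1))        # ~37.4

# From standard tables, the two-sided critical value t_{0.05,23} is about
# 1.714, so |t0| far exceeds it and H0: beta1 = 0 is rejected.
```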
<br />
In Weibull++ DOE folios, information related to the <math>t\,\!</math> test is displayed in the Regression Information table as shown in the following figure. In this table the <math>t\,\!</math> test for <math>\beta_1\,\!</math> is displayed in the row for the term Temperature because <math>\beta_1\,\!</math> is the coefficient that represents the variable temperature in the regression model. The columns labeled Standard Error, T Value and P Value represent the standard error, the test statistic for the test and the <math>p\,\!</math> value for the <math>t\,\!</math> test, respectively. These values have been calculated for <math>\beta_1\,\!</math> in this example. The Coefficient column represents the estimate of regression coefficients. The Effect column represents values obtained by multiplying the coefficients by a factor of 2. This value is useful in the case of two factor experiments and is explained in [[Two_Level_Factorial_Experiments| Two Level Factorial Experiments]]. Columns Low Confidence and High Confidence represent the limits of the confidence intervals for the regression coefficients and are explained in [[Simple_Linear_Regression_Analysis#Confidence_Interval_on_Regression_Coefficients|Confidence Interval on Regression Coefficients]].<br />
<br />
<br />
[[Image:doe4_7.png|center|826px|Regression results for the data.|link=]]<br />
<br />
===Analysis of Variance Approach to Test the Significance of Regression===<br />
<br />
The analysis of variance (ANOVA) is another method to test for the significance of regression. As the name implies, this approach uses the variance of the observed data to determine if a regression model can be applied to the observed data. The observed variance is partitioned into components that are then used in the test for significance of regression.<br />
<br />
====Sum of Squares====<br />
<br />
The total variance (i.e., the variance of all of the observed data) is estimated using the observed data. As mentioned in [[Statistical_Background_on_DOE| Statistical Background]], the variance of a population can be estimated using the sample variance, which is calculated using the following relationship:<br />
<br />
<br />
::<math>{{s}^{2}}=\frac{\underset{i=1}{\overset{n}{\mathop{\sum }}}\,{{({{y}_{i}}-\bar{y})}^{2}}}{n-1}\,\!</math><br />
<br />
<br />
The quantity in the numerator of the previous equation is called the ''sum of squares''. It is the sum of the square of deviations of all the observations, <math>{{y}_{i}}\,\!</math>, from their mean, <math>\bar{y}\,\!</math>. In the context of ANOVA this quantity is called the ''total sum of squares'' (abbreviated <math>S{{S}_{T}}\,\!</math>) because it relates to the total variance of the observations. Thus:<br />
<br />
<br />
::<math>S{{S}_{T}}=\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{y}_{i}}-\bar{y})}^{2}}\,\!</math><br />
<br />
<br />
The denominator in the relationship of the sample variance is the number of degrees of freedom associated with the sample variance. Therefore, the number of degrees of freedom associated with <math>S{{S}_{T}}\,\!</math>, <math>dof(S{{S}_{T}})\,\!</math>, is <math>n-1\,\!</math>. The sample variance is also referred to as a ''mean square'' because it is obtained by dividing the sum of squares by the respective degrees of freedom. Therefore, the total mean square (abbreviated <math>M{{S}_{T}}\,\!</math>) is:<br />
<br />
<br />
::<math>M{{S}_{T}}=\frac{S{{S}_{T}}}{dof(S{{S}_{T}})}=\frac{S{{S}_{T}}}{n-1}\,\!</math><br />
<br />
<br />
When you attempt to fit a regression model to the observations, you are trying to explain some of the variation of the observations using this model. If the regression model is such that the resulting fitted regression line passes through all of the observations, then you would have a "perfect" model (see (a) of the figure below). In this case the model would explain all of the variability of the observations. Therefore, the model sum of squares (also referred to as the regression sum of squares and abbreviated <math>S{{S}_{R}}\,\!</math>) equals the total sum of squares; i.e., the model explains all of the observed variance:<br />
<br />
<br />
::<math>S{{S}_{R}}=S{{S}_{T}}\,\!</math><br />
<br />
<br />
For the perfect model, the regression sum of squares, <math>S{{S}_{R}}\,\!</math>, equals the total sum of squares, <math>S{{S}_{T}}\,\!</math>, because all estimated values, <math>{{\hat{y}}_{i}}\,\!</math>, will equal the corresponding observations, <math>{{y}_{i}}\,\!</math>. <math>S{{S}_{R}}\,\!</math> can be calculated using a relationship similar to the one for obtaining <math>S{{S}_{T}}\,\!</math> by replacing <math>{{y}_{i}}\,\!</math> by <math>{{\hat{y}}_{i}}\,\!</math> in the relationship of <math>S{{S}_{T}}\,\!</math>. Therefore:<br />
<br />
<br />
::<math>S{{S}_{R}}=\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{\hat{y}}_{i}}-\bar{y})}^{2}}\,\!</math><br />
<br />
<br />
The number of degrees of freedom associated with <math>S{{S}_{R}}\,\!</math> is 1. <br />
<br />
<br />
Based on the preceding discussion of ANOVA, a perfect regression model exists when the fitted regression line passes through all observed points. However, this is not usually the case, as seen in (b) of the following figure. <br />
<br />
<br />
[[Image:doe4.8.png|center|300px|A perfect regression model will pass through all observed data points as shown in (a). Most models are imperfect and do not fit perfectly to all data points as shown in (b).|link=]]<br />
<br />
<br />
In both of these plots, a number of points do not follow the fitted regression line. This indicates that a part of the total variability of the observed data still remains unexplained. This portion of the total variability or the total sum of squares, that is not explained by the model, is called the ''residual sum of squares'' or the ''error sum of squares'' (abbreviated <math>S{{S}_{E}}\,\!</math>). The deviation for this sum of squares is obtained at each observation in the form of the residuals, <math>{{e}_{i}}\,\!</math>. The error sum of squares can be obtained as the sum of squares of these deviations:<br />
<br />
<br />
::<math>S{{S}_{E}}=\underset{i=1}{\overset{n}{\mathop \sum }}\,e_{i}^{2}=\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{y}_{i}}-{{\hat{y}}_{i}})}^{2}}\,\!</math><br />
<br />
<br />
The number of degrees of freedom associated with <math>S{{S}_{E}}\,\!</math>, <math>dof(S{{S}_{E}})\,\!</math>, is <math>(n-2)\,\!</math>. <br />
The total variability of the observed data (i.e., total sum of squares, <math>S{{S}_{T}}\,\!</math>) can be written using the portion of the variability explained by the model, <math>S{{S}_{R}}\,\!</math>, and the portion unexplained by the model, <math>S{{S}_{E}}\,\!</math>, as:<br />
<br />
<br />
::<math>S{{S}_{T}}=S{{S}_{R}}+S{{S}_{E}}\,\!</math><br />
<br />
<br />
The above equation is also referred to as the analysis of variance identity and can be expanded as follows:<br />
<br />
<br />
::<math>\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{y}_{i}}-\bar{y})}^{2}}=\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{\hat{y}}_{i}}-\bar{y})}^{2}}+\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{y}_{i}}-{{\hat{y}}_{i}})}^{2}}\,\!</math><br />
<br />
<br />
[[Image:doe4.9.png|center|600px|Scatter plots showing the deviations for the sum of squares used in ANOVA. (a) shows deviations for <math>S S_{T}\,\!</math>, (b) shows deviations for <math>S S_{R}\,\!</math>, and (c) shows deviations for <math>S S_{E}\,\!</math>.|link=]]<br />
<br />
====Mean Squares====<br />
<br />
As mentioned previously, mean squares are obtained by dividing the sum of squares by the respective degrees of freedom. For example, the error mean square, <math>M{{S}_{E}}\,\!</math>, can be obtained as:<br />
<br />
<br />
::<math>M{{S}_{E}}=\frac{S{{S}_{E}}}{dof(S{{S}_{E}})}=\frac{S{{S}_{E}}}{n-2}\,\!</math><br />
<br />
<br />
The error mean square is an estimate of the variance, <math>{{\sigma }^{2}}\,\!</math>, of the random error term, <math>\epsilon\,\!</math>, and can be written as: <br />
<br />
<br />
::<math>{{\hat{\sigma }}^{2}}=\frac{S{{S}_{E}}}{n-2}\,\!</math><br />
<br />
<br />
Similarly, the regression mean square, <math>M{{S}_{R}}\,\!</math>, can be obtained by dividing the regression sum of squares by the respective degrees of freedom as follows:<br />
<br />
<br />
::<math>M{{S}_{R}}=\frac{S{{S}_{R}}}{dof(S{{S}_{R}})}=\frac{S{{S}_{R}}}{1}\,\!</math><br />
<br />
<br />
====F Test====<br />
<br />
To test the hypothesis <math>{{H}_{0}}:{{\beta }_{1}}=0\,\!</math>, the statistic used is based on the <math>F\,\!</math> distribution. It can be shown that if the null hypothesis <math>{{H}_{0}}\,\!</math> is true, then the statistic:<br />
<br />
<br />
::<math>{{F}_{0}}=\frac{M{{S}_{R}}}{M{{S}_{E}}}=\frac{S{{S}_{R}}/1}{S{{S}_{E}}/(n-2)}\,\!</math><br />
<br />
<br />
follows the <math>F\,\!</math> distribution with <math>1\,\!</math> degree of freedom in the numerator and <math>(n-2)\,\!</math> degrees of freedom in the denominator. <math>{{H}_{0}}\,\!</math> is rejected if the calculated statistic, <math>{{F}_{0}}\,\!</math>, is such that:<br />
<br />
<br />
::<math>{{F}_{0}}>{{f}_{\alpha ,1,n-2}}\,\!</math><br />
<br />
<br />
where <math>{{f}_{\alpha ,1,n-2}}\,\!</math> is the percentile of the <math>F\,\!</math> distribution corresponding to a cumulative probability of (<math>1-\alpha\,\!</math>) and <math>\alpha\,\!</math> is the significance level.<br />
<br />
<br />
'''Example'''<br />
<br />
The analysis of variance approach to test the significance of regression can be applied to the yield data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]]. To calculate the statistic, <math>{{F}_{0}}\,\!</math>, for the test, the sums of squares have to be obtained. They can be calculated as shown next.<br />
The total sum of squares can be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{T}}= & \underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{y}_{i}}-\bar{y})}^{2}} \\ <br />
= & \underset{i=1}{\overset{25}{\mathop \sum }}\,{{({{y}_{i}}-166.32)}^{2}} \\ <br />
= & 22979.44 <br />
\end{align}\,\!</math><br />
<br />
<br />
The regression sum of squares can be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{R}}= & \underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{{\hat{y}}}_{i}}-\bar{y})}^{2}} \\ <br />
= & \underset{i=1}{\overset{25}{\mathop \sum }}\,{{({{{\hat{y}}}_{i}}-166.32)}^{2}} \\ <br />
= & 22607.81 <br />
\end{align}\,\!</math><br />
<br />
<br />
The error sum of squares can be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{E}}= & \underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{y}_{i}}-{{{\hat{y}}}_{i}})}^{2}} \\ <br />
= & \underset{i=1}{\overset{25}{\mathop \sum }}\,{{({{y}_{i}}-{{{\hat{y}}}_{i}})}^{2}} \\ <br />
= & 371.63 <br />
\end{align}\,\!</math><br />
<br />
<br />
Knowing the sum of squares, the statistic to test <math>{{H}_{0}}:{{\beta }_{1}}=0\,\!</math> can be calculated as follows:<br />
<br />
<br />
::<math>\begin{align}<br />
{{f}_{0}}=& \frac{M{{S}_{R}}}{M{{S}_{E}}} \\ <br />
= & \frac{S{{S}_{R}}/1}{S{{S}_{E}}/(n-2)} \\ <br />
= & \frac{22607.81/1}{371.63/(25-2)} \\ <br />
= & 1399.20 <br />
\end{align}\,\!</math><br />
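The computation above can be reproduced with a short script. The critical value 2.937 is hard-coded here from an F table at a significance level of 0.1 rather than computed:<br />

```python
# Sums of squares from the yield data example
SS_R = 22607.81   # regression sum of squares
SS_E = 371.63     # error sum of squares
n = 25            # number of observations

MS_R = SS_R / 1          # regression mean square (1 degree of freedom)
MS_E = SS_E / (n - 2)    # error mean square (n - 2 degrees of freedom)
f0 = MS_R / MS_E         # test statistic, approximately 1399.2

f_crit = 2.937           # f_{0.1, 1, 23} taken from an F table
reject_H0 = f0 > f_crit  # True: the slope is judged nonzero
```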
<br />
<br />
The critical value at a significance level of 0.1 is <math>{{f}_{0.1,1,23}}=2.937\,\!</math>. Since <math>{{f}_{0}}>{{f}_{\alpha ,1,n-2}}\,\!</math>, <math>{{H}_{0}}:{{\beta }_{1}}=0\,\!</math> is rejected and it is concluded that <math>{{\beta }_{1}}\,\!</math> is not zero. Alternatively, the <math>p\,\!</math> value can also be used. The <math>p\,\!</math> value corresponding to the test statistic, <math>{{f}_{0}}\,\!</math>, based on the <math>F\,\!</math> distribution with one degree of freedom in the numerator and 23 degrees of freedom in the denominator is:<br />
<br />
<br />
::<math>\begin{align}<br />
p\text{ }value= & 1-P(F\le {{f}_{0}}) \\ <br />
= & 4.17\times {{10}^{-22}} <br />
\end{align}\,\!</math><br />
<br />
<br />
Assuming that the desired significance is 0.1, since the <math>p\,\!</math> value < 0.1, then <math>{{H}_{0}}:{{\beta }_{1}}=0\,\!</math> is rejected, implying that a relation does exist between temperature and yield for the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]]. Using this result along with the scatter plot of the above [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| figure]], it can be concluded that the relationship that exists between temperature and yield is linear. This result is displayed in the ANOVA table as shown in the following figure. Note that this is the same result that was obtained from the <math>t\,\!</math> test in the section [[Simple_Linear_Regression_Analysis#Tests|t Tests]]. The ANOVA and Regression Information tables in Weibull++ DOE folios represent two different ways to test for the significance of the regression model. In the case of multiple linear regression models these tables are expanded to allow tests on individual variables used in the model. This is done using extra sum of squares. Multiple linear regression models and the application of extra sum of squares in the analysis of these models are discussed in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].<br />
<br />
<br />
[[Image:doe4_10.png|center|747px| ANOVA table for the data.|link=]]<br />
<br />
==Confidence Intervals in Simple Linear Regression==<br />
<br />
A confidence interval represents a closed interval where a certain percentage of the population is likely to lie. For example, a 90% confidence interval with a lower limit of <math>A\,\!</math> and an upper limit of <math>B\,\!</math> implies that 90% of the population lies between the values of <math>A\,\!</math> and <math>B\,\!</math>. Out of the remaining 10% of the population, 5% is less than <math>A\,\!</math> and 5% is greater than <math>B\,\!</math>. (For details refer to the [[Life_Data_Analysis_Reference_Book| Life Data Analysis Reference Book]].) This section discusses confidence intervals used in simple linear regression analysis.<br />
<br />
===Confidence Interval on Regression Coefficients===<br />
<br />
A 100 (<math>1-\alpha\,\!</math>) percent confidence interval on <math>{{\beta }_{1}}\,\!</math> is obtained as follows:<br />
<br />
<br />
::<math>{{\hat{\beta }}_{1}}\pm {{t}_{\alpha /2,n-2}}\cdot se({{\hat{\beta }}_{1}})\,\!</math><br />
<br />
<br />
Similarly, a 100 (<math>1-\alpha\,\!</math>) percent confidence interval on <math>{{\beta }_{0}}\,\!</math> is obtained as:<br />
<br />
<br />
::<math>{{\hat{\beta }}_{0}}\pm {{t}_{\alpha /2,n-2}}\cdot se({{\hat{\beta }}_{0}})\,\!</math><br />
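As a sketch, the interval on the slope can be computed as follows. The standard error of the slope is taken as <math>\sqrt{{{\hat{\sigma }}^{2}}/{{S}_{xx}}}\,\!</math> (a standard result not derived in this section), with <math>M{{S}_{E}}\,\!</math>, <math>{{S}_{xx}}\,\!</math> and the <math>t\,\!</math> quantile taken from the yield example:<br />

```python
import math

# Values from the yield data example
beta1_hat = 1.9952   # estimated slope
MS_E = 16.16         # error mean square, estimate of sigma^2
S_xx = 5679.36       # sum of (x_i - x_bar)^2
t = 2.069            # t_{0.025, 23} for a 95% interval

se_beta1 = math.sqrt(MS_E / S_xx)   # standard error of the slope
lower = beta1_hat - t * se_beta1
upper = beta1_hat + t * se_beta1    # interval roughly (1.88, 2.11)
```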
<br />
<br />
===Confidence Interval on Fitted Values===<br />
<br />
A 100 (<math>1-\alpha\,\!</math>) percent confidence interval on any fitted value, <math>{{\hat{y}}_{i}}\,\!</math>, is obtained as follows:<br />
<br />
<br />
::<math>{{\hat{y}}_{i}}\pm {{t}_{\alpha /2,n-2}}\sqrt{{{{\hat{\sigma }}}^{2}}\left[ \frac{1}{n}+\frac{{{({{x}_{i}}-\bar{x})}^{2}}}{\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{x}_{i}}-\bar{x})}^{2}}} \right]}\,\!</math><br />
<br />
<br />
It can be seen that the width of the confidence interval depends on the value of <math>{{x}_{i}}\,\!</math>: it is a minimum at <math>{{x}_{i}}=\bar{x}\,\!</math> and widens as <math>\left| {{x}_{i}}-\bar{x} \right|\,\!</math> increases.<br />
<br />
===Confidence Interval on New Observations===<br />
<br />
For the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]], assume that a new value of the yield is observed after the regression model is fit to the data. This new observation is independent of the observations used to obtain the regression model. If <math>{{x}_{p}}\,\!</math> is the level of the temperature at which the new observation was taken, then the estimate for this new value based on the fitted regression model is:<br />
<br />
<br />
::<math>\begin{align}<br />
{{{\hat{y}}}_{p}}= & {{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{1}}{{x}_{p}} \\<br />
= & 17.0016+1.9952\times {{x}_{p}} \\<br />
\end{align}\,\!</math><br />
<br />
<br />
If a confidence interval needs to be obtained on <math>{{\hat{y}}_{p}}\,\!</math>, then this interval should include both the error from the fitted model and the error associated with future observations. This is because <math>{{\hat{y}}_{p}}\,\!</math> represents the estimate for a value of <math>Y\,\!</math> that was not used to obtain the regression model. The confidence interval on <math>{{\hat{y}}_{p}}\,\!</math> is referred to as the ''prediction interval''. A 100 (<math>1-\alpha\,\!</math>) percent prediction interval on a new observation is obtained as follows:<br />
<br />
<br />
::<math>{{\hat{y}}_{p}}\pm {{t}_{\alpha /2,n-2}}\sqrt{{{{\hat{\sigma }}}^{2}}\left[ 1+\frac{1}{n}+\frac{{{({{x}_{p}}-\bar{x})}^{2}}}{\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{x}_{i}}-\bar{x})}^{2}}} \right]}\,\!</math><br />
<br />
<br />
'''Example'''<br />
<br />
To illustrate the calculation of confidence intervals, the 95% confidence intervals on the response at <math>x=93\,\!</math> for the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]] is obtained in this example. A 95% prediction interval is also obtained assuming that a new observation for the yield was made at <math>x=91\,\!</math>.<br />
<br />
The fitted value, <math>{{\hat{y}}_{i}}\,\!</math>, corresponding to <math>x=93\,\!</math> is:<br />
<br />
::<math>\begin{align}<br />
{{{\hat{y}}}_{21}}= & {{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{1}}{{x}_{21}} \\ <br />
= & 17.0016+1.9952\times 93 \\ <br />
= & 202.6 <br />
\end{align}\,\!</math><br />
<br />
<br />
The 95% confidence interval <math>(\alpha =0.05)\,\!</math> on the fitted value, <math>{{\hat{y}}_{21}}=202.6\,\!</math>, is:<br />
<br />
<br />
::<math>\begin{align}<br />
= & {{{\hat{y}}}_{i}}\pm {{t}_{\alpha /2,n-2}}\sqrt{{{{\hat{\sigma }}}^{2}}\left[ \frac{1}{n}+\frac{{{({{x}_{i}}-\bar{x})}^{2}}}{\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{x}_{i}}-\bar{x})}^{2}}} \right]} \\ <br />
= & 202.6\pm {{t}_{0.025,23}}\sqrt{M{{S}_{E}}\left[ \frac{1}{25}+\frac{{{(93-74.84)}^{2}}}{5679.36} \right]} \\ <br />
= & 202.6\pm 2.069\sqrt{16.16\left[ \frac{1}{25}+\frac{{{(93-74.84)}^{2}}}{5679.36} \right]} \\ <br />
= & 202.6\pm 2.602 <br />
\end{align}\,\!</math><br />
<br />
<br />
The 95% limits on <math>{{\hat{y}}_{21}}\,\!</math> are 199.95 and 205.2, respectively.<br />
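The interval just computed can be checked numerically. Because the rounded value <math>{{\hat{y}}_{21}}=202.6\,\!</math> is used here, the lower limit comes out as 200.0 rather than the 199.95 obtained from the unrounded fit:<br />

```python
import math

# Values from the fitted-value example (x = 93)
y_hat = 202.6           # fitted value, rounded
MS_E = 16.16            # error mean square
n = 25
x_i, x_bar = 93, 74.84
S_xx = 5679.36          # sum of (x_i - x_bar)^2
t = 2.069               # t_{0.025, 23}

half_width = t * math.sqrt(MS_E * (1 / n + (x_i - x_bar) ** 2 / S_xx))
lower, upper = y_hat - half_width, y_hat + half_width
```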
The estimated value based on the fitted regression model for the new observation at <math>x=91\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
{{{\hat{y}}}_{p}}= & {{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{1}}{{x}_{p}} \\ <br />
= & 17.0016+1.9952\times 91 \\ <br />
= & 198.6 <br />
\end{align}\,\!</math><br />
<br />
<br />
The 95% prediction interval on <math>{{\hat{y}}_{p}}=198.6\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
= & {{{\hat{y}}}_{p}}\pm {{t}_{\alpha /2,n-2}}\sqrt{{{{\hat{\sigma }}}^{2}}\left[ 1+\frac{1}{n}+\frac{{{({{x}_{p}}-\bar{x})}^{2}}}{\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{x}_{i}}-\bar{x})}^{2}}} \right]} \\ <br />
= & 198.6\pm {{t}_{0.025,23}}\sqrt{M{{S}_{E}}\left[ 1+\frac{1}{25}+\frac{{{(91-74.84)}^{2}}}{5679.36} \right]} \\ <br />
= & 198.6\pm 2.069\sqrt{16.16\left[ 1+\frac{1}{25}+\frac{{{(91-74.84)}^{2}}}{5679.36} \right]} \\ <br />
= & 198.6\pm 2.069\times 4.1889 \\ <br />
= & 198.6\pm 8.67 <br />
\end{align}\,\!</math><br />
<br />
<br />
The 95% limits on <math>{{\hat{y}}_{p}}\,\!</math> are 189.9 and 207.2, respectively. In Weibull++ DOE folios, confidence and prediction intervals can be calculated from the control panel. The prediction interval values calculated in this example are shown in the figure below as Low Prediction Interval and High Prediction Interval, respectively. The columns labeled Mean Predicted and Standard Error represent the values of <math>{{\hat{y}}_{p}}\,\!</math> and the standard error used in the calculations. <br />
<br />
<br />
[[Image:doe4_11.png|center|786px|Calculation of prediction intervals in Weibull++.|link=]]<br />
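The prediction interval from this example can be verified with a short calculation (values taken from the example above):<br />

```python
import math

# Values from the prediction-interval example (new observation at x = 91)
y_hat_p = 198.6         # predicted value, rounded
MS_E = 16.16            # error mean square
n = 25
x_p, x_bar = 91, 74.84
S_xx = 5679.36          # sum of (x_i - x_bar)^2
t = 2.069               # t_{0.025, 23}

# Note the extra "1 +" term that widens the interval for a new observation
half_width = t * math.sqrt(MS_E * (1 + 1 / n + (x_p - x_bar) ** 2 / S_xx))
lower, upper = y_hat_p - half_width, y_hat_p + half_width
```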
<br />
==Measures of Model Adequacy==<br />
<br />
It is important to analyze the regression model before inferences based on the model are undertaken. The following sections present some techniques that can be used to check the appropriateness of the model for the given data. These techniques help to determine if any of the model assumptions have been violated.<br />
<br />
===Coefficient of Determination (<math>R^2 </math>)===<br />
The coefficient of determination is a measure of the amount of variability in the data accounted for by the regression model. As mentioned previously, the total variability of the data is measured by the total sum of squares, <math>SS_T\,\!</math>. The amount of this variability explained by the regression model is the regression sum of squares, <math>SS_R\,\!</math>. The coefficient of determination is the ratio of the regression sum of squares to the total sum of squares.<br />
<br />
<br />
::<math>R^2 = \frac{SS_R}{SS_T}\,\!</math><br />
<br />
<br />
<math>R^2\,\!</math> can take on values between 0 and 1, since <math>0\le SS_R\le SS_T\,\!</math>. For the yield data example, <math>R^2\,\!</math> can be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
{{R}^{2}}= & \frac{S{{S}_{R}}}{S{{S}_{T}}} \\ <br />
= & \frac{22607.81}{22979.44} \\ <br />
= & 0.98 <br />
\end{align}\,\!</math><br />
<br />
<br />
<br />
Therefore, 98% of the variability in the yield data is explained by the regression model, indicating a very good fit of the model. It may appear that larger values of <math>{{R}^{2}}\,\!</math> indicate a better fitting regression model. However, <math>{{R}^{2}}\,\!</math> should be used cautiously as this is not always the case. The value of <math>{{R}^{2}}\,\!</math> increases as more terms are added to the model, even if the new term does not contribute significantly to the model. Therefore, an increase in the value of <math>{{R}^{2}}\,\!</math> cannot be taken as a sign to conclude that the new model is superior to the older model. Adding a new term may make the regression model worse if the error mean square, <math>M{{S}_{E}}\,\!</math>, for the new model is larger than the <math>M{{S}_{E}}\,\!</math> of the older model, even though the new model will show an increased value of <math>{{R}^{2}}\,\!</math>. In the results obtained from the DOE folio, <math>{{R}^{2}}\,\!</math> is displayed as R-sq under the ANOVA table (as shown in the figure below), which displays the complete analysis sheet for the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]].<br />
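The R-sq value reported by the software can be reproduced directly from the sums of squares calculated earlier:<br />

```python
# Sums of squares from the yield data example
SS_R = 22607.81   # regression sum of squares
SS_T = 22979.44   # total sum of squares

r_squared = SS_R / SS_T          # fraction of variability explained, ~0.98
SS_E = SS_T - SS_R               # remainder is the error sum of squares
```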
<br />
The other values displayed along with R-sq are S, R-sq(adj), PRESS and R-sq(pred). These values measure different aspects of the adequacy of the regression model. For example, the value of S is the square root of the error mean square, <math>MS_E\,\!</math>, and represents the "standard error of the model." A lower value of S indicates a better fitting model. The values of S, R-sq and R-sq(adj) indicate how well the model fits the observed data. The values of PRESS and R-sq(pred) are indicators of how well the regression model predicts new observations. R-sq(adj), PRESS and R-sq(pred) are explained in [[Multiple Linear Regression Analysis]].<br />
<br />
<br />
[[Image:doe4_12.png|center|874px|Complete analysis for the data.|link=]]<br />
<br />
===Residual Analysis===<br />
In the simple linear regression model the true error terms, <math>{{\epsilon }_{i}}\,\!</math>, are never known. The residuals, <math>{{e}_{i}}\,\!</math>, may be thought of as the observed error terms that are similar to the true error terms. Since the true error terms, <math>{{\epsilon }_{i}}\,\!</math>, are assumed to be normally distributed with a mean of zero and a variance of <math>{{\sigma }^{2}}\,\!</math>, in a good model the observed error terms (i.e., the residuals, <math>{{e}_{i}}\,\!</math>) should also follow these assumptions. Thus the residuals in the simple linear regression should be normally distributed with a mean of zero and a constant variance of <math>{{\sigma }^{2}}\,\!</math>. Residuals are usually plotted against the fitted values, <math>{{\hat{y}}_{i}}\,\!</math>, against the predictor variable values, <math>{{x}_{i}}\,\!</math>, and against time or run-order sequence, in addition to the normal probability plot. Plots of residuals are used to check for the following:<br />
<br />
<br />
:1. Residuals follow the normal distribution. <br />
:2. Residuals have a constant variance. <br />
:3. Regression function is linear. <br />
:4. A pattern does not exist when residuals are plotted in a time or run-order sequence. <br />
:5. There are no outliers. <br />
<br />
<br />
Examples of residual plots are shown in the following figure. (a) is a satisfactory plot with the residuals falling in a horizontal band with no systematic pattern. Such a plot indicates an appropriate regression model. (b) shows residuals falling in a funnel shape. Such a plot indicates increase in variance of residuals and the assumption of constant variance is violated here. Transformation on <math>Y\,\!</math> may be helpful in this case (see [[Simple_Linear_Regression_Analysis#Transformations| Transformations]]). If the residuals follow the pattern of (c) or (d), then this is an indication that the linear regression model is not adequate. Addition of higher order terms to the regression model or transformation on <math>x\,\!</math> or <math>Y\,\!</math> may be required in such cases. A plot of residuals may also show a pattern as seen in (e), indicating that the residuals increase (or decrease) as the run order sequence or time progresses. This may be due to factors such as operator-learning or instrument-creep and should be investigated further. <br />
<br />
<br />
[[Image:doe4.13.png|center|550px|Possible residual plots (against fitted values, time or run-order) that can be obtained from simple linear regression analysis.|link=]] <br />
<br />
<br />
'''Example'''<br />
<br />
Residual plots for the data of the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]] are shown in the following figures. The first of these figures is the normal probability plot. It can be observed that the residuals follow the normal distribution and the assumption of normality is valid here. In the second figure the residuals are plotted against the fitted values, <math>{{\hat{y}}_{i}}\,\!</math>, and in the third figure the residuals are plotted against the run order. Both of these plots show that the 21st observation seems to be an outlier. Further investigations are needed to study the cause of this outlier. <br />
<br />
<br />
[[Image:doe4_14.png|center|650px|Normal probability plot of residuals for the data.|link=]]<br />
<br />
<br />
[[Image:doe4_15.png|center|650px|Plot of residuals against fitted values for the data.|link=]]<br />
<br />
<br />
[[Image:doe4_16.png|center|650px|Plot of residuals against run order for the data.|link=]]<br />
<br />
===Lack-of-Fit Test===<br />
<br />
As mentioned in [[Simple_Linear_Regression_Analysis#Analysis_of_Variance_Approach_to_Test_the_Significance_of_Regression| Analysis of Variance Approach]], ANOVA, a perfect regression model results in a fitted line that passes exactly through all observed data points. This perfect model will give us a zero error sum of squares (<math>S{{S}_{E}}=0\,\!</math>). Thus, no error exists for the perfect model. However, if you record the response values for the same values of <math>{{x}_{i}}\,\!</math> for a second time, in conditions maintained as strictly identical as possible to the first time, observations from the second time will not all fall along the perfect model. The deviations in observations recorded for the second time constitute the "purely" random variation or noise. The sum of squares due to pure error (abbreviated <math>S{{S}_{PE}}\,\!</math>) quantifies these variations. <math>S{{S}_{PE}}\,\!</math> is calculated by taking repeated observations at some or all values of <math>{{x}_{i}}\,\!</math> and adding up the square of deviations at each level of <math>x\,\!</math> using the respective repeated observations at that <math>x\,\!</math> value. <br />
<br />
Assume that there are <math>n\,\!</math> levels of <math>x\,\!</math> and <math>{{m}_{i}}\,\!</math> repeated observations are taken at the <math>i\,\!</math>th level. The data is collected as shown next:<br />
<br />
<br />
::<math>\begin{align}<br />
& {{y}_{11}},{{y}_{12}},....,{{y}_{1{{m}_{1}}}}\text{ repeated observations at }{{x}_{1}} \\ <br />
& {{y}_{21}},{{y}_{22}},....,{{y}_{2{{m}_{2}}}}\text{ repeated observations at }{{x}_{2}} \\ <br />
& ... \\ <br />
& {{y}_{i1}},{{y}_{i2}},....,{{y}_{i{{m}_{i}}}}\text{ repeated observations at }{{x}_{i}} \\ <br />
& ... \\ <br />
& {{y}_{n1}},{{y}_{n2}},....,{{y}_{n{{m}_{n}}}}\text{ repeated observations at }{{x}_{n}} <br />
\end{align}\,\!</math><br />
<br />
<br />
The sum of squares of the deviations from the mean of the observations at the <math>i\,\!</math>th level of <math>x\,\!</math>, <math>{{x}_{i}}\,\!</math>, can be calculated as:<br />
<br />
<br />
::<math>\underset{j=1}{\overset{{{m}_{i}}}{\mathop \sum }}\,{{({{y}_{ij}}-{{\bar{y}}_{i}})}^{2}}\,\!</math><br />
<br />
<br />
where <math>{{\bar{y}}_{i}}\,\!</math> is the mean of the <math>{{m}_{i}}\,\!</math> repeated observations corresponding to <math>{{x}_{i}}\,\!</math> (<math>{{\bar{y}}_{i}}=(1/{{m}_{i}})\underset{j=1}{\overset{{{m}_{i}}}{\mathop \sum }}\,{{y}_{ij}}\,\!</math>). The number of degrees of freedom for these deviations is (<math>{{m}_{i}}-1\,\!</math>) as there are <math>{{m}_{i}}\,\!</math> observations at the <math>i\,\!</math>th level of <math>x\,\!</math> but one degree of freedom is lost in calculating the mean, <math>{{\bar{y}}_{i}}\,\!</math>.<br />
<br />
The total sum of square deviations (or <math>S{{S}_{PE}}\,\!</math>) for all levels of <math>x\,\!</math> can be obtained by summing the deviations for all <math>{{x}_{i}}\,\!</math> as shown next:<br />
<br />
<br />
::<math>S{{S}_{PE}}=\underset{i=1}{\overset{n}{\mathop \sum }}\,\underset{j=1}{\overset{{{m}_{i}}}{\mathop \sum }}\,{{({{y}_{ij}}-{{\bar{y}}_{i}})}^{2}}\,\!</math><br />
<br />
<br />
The total number of degrees of freedom associated with <math>S{{S}_{PE}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
= & \underset{i=1}{\overset{n}{\mathop \sum }}\,({{m}_{i}}-1) \\ <br />
= & \underset{i=1}{\overset{n}{\mathop \sum }}\,{{m}_{i}}-n <br />
\end{align}\,\!</math><br />
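These two quantities translate directly into code. The repeated observations below are made-up values for illustration, not data from the yield example:<br />

```python
def pure_error_ss(groups):
    """Sum of squared deviations from each level's mean, over all x levels.

    groups: one inner list of repeated y observations per level of x.
    """
    total = 0.0
    for ys in groups:
        y_bar = sum(ys) / len(ys)                 # mean at this x level
        total += sum((y - y_bar) ** 2 for y in ys)
    return total

# Hypothetical repeated observations at three x levels
groups = [[10.0, 12.0], [15.0, 15.0], [20.0, 22.0, 21.0]]
ss_pe = pure_error_ss(groups)                     # SS_PE
dof = sum(len(ys) - 1 for ys in groups)           # sum of (m_i - 1)
```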
<br />
<br />
If all <math>{{m}_{i}}=m\,\!</math>, (i.e., <math>m\,\!</math> repeated observations are taken at all levels of <math>x\,\!</math>), then <math>\underset{i=1}{\overset{n}{\mathop \sum }}\,{{m}_{i}}=nm\,\!</math> and the degrees of freedom associated with <math>S{{S}_{PE}}\,\!</math> are: <br />
<br />
<br />
::<math>=nm-n\,\!</math><br />
<br />
<br />
The corresponding mean square in this case will be:<br />
<br />
<br />
::<math>M{{S}_{PE}}=\frac{S{{S}_{PE}}}{nm-n}\,\!</math><br />
<br />
<br />
When repeated observations are used for a perfect regression model, the sum of squares due to pure error, <math>S{{S}_{PE}}\,\!</math>, is also considered as the error sum of squares, <math>S{{S}_{E}}\,\!</math>. For the case when repeated observations are used with imperfect regression models, there are two components of the error sum of squares, <math>S{{S}_{E}}\,\!</math>. One portion is the pure error due to the repeated observations. The other portion is the error that represents variation not captured because of the imperfect model. The second portion is termed the sum of squares due to lack-of-fit (abbreviated <math>S{{S}_{LOF}}\,\!</math>) to point to the deficiency in fit due to departure from the perfect-fit model. Thus, for an imperfect regression model:<br />
<br />
<br />
::<math>S{{S}_{E}}=S{{S}_{PE}}+S{{S}_{LOF}}\,\!</math><br />
<br />
<br />
Knowing <math>S{{S}_{E}}\,\!</math> and <math>S{{S}_{PE}}\,\!</math>, the previous equation can be used to obtain <math>S{{S}_{LOF}}\,\!</math>:<br />
<br />
<br />
::<math>S{{S}_{LOF}}=S{{S}_{E}}-S{{S}_{PE}}\,\!</math><br />
<br />
<br />
The degrees of freedom associated with <math>S{{S}_{LOF}}\,\!</math> can be obtained in a similar manner using subtraction. For the case when <math>m\,\!</math> repeated observations are taken at all levels of <math>x\,\!</math>, the number of degrees of freedom associated with <math>S{{S}_{PE}}\,\!</math> is:<br />
<br />
<br />
::<math>dof(S{{S}_{PE}})=nm-n\,\!</math><br />
<br />
<br />
Since there are <math>nm\,\!</math> total observations, the number of degrees of freedom associated with <math>S{{S}_{E}}\,\!</math> is:<br />
<br />
<br />
::<math>dof(S{{S}_{E}})=nm-2\,\!</math><br />
<br />
<br />
Therefore, the number of degrees of freedom associated with <math>S{{S}_{LOF}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
= & dof(S{{S}_{E}})-dof(S{{S}_{PE}}) \\ <br />
= & (nm-2)-(nm-n) \\ <br />
= & n-2 <br />
\end{align}\,\!</math><br />
<br />
<br />
The corresponding mean square, <math>M{{S}_{LOF}}\,\!</math>, can now be obtained as:<br />
<br />
<br />
::<math>M{{S}_{LOF}}=\frac{S{{S}_{LOF}}}{n-2}\,\!</math><br />
<br />
<br />
The magnitude of <math>S{{S}_{LOF}}\,\!</math> or <math>M{{S}_{LOF}}\,\!</math> will provide an indication of how far the regression model is from the perfect model. An <math>F\,\!</math> test exists to examine the lack-of-fit at a particular significance level. The quantity <math>M{{S}_{LOF}}/M{{S}_{PE}}\,\!</math> follows an <math>F\,\!</math> distribution with <math>(n-2)\,\!</math> degrees of freedom in the numerator and <math>(nm-n)\,\!</math> degrees of freedom in the denominator when all <math>{{m}_{i}}\,\!</math> equal <math>m\,\!</math>. The test statistic for the lack-of-fit test is:<br />
<br />
<br />
::<math>{{F}_{0}}=\frac{M{{S}_{LOF}}}{M{{S}_{PE}}}\,\!</math><br />
<br />
<br />
If the calculated statistic, <math>{{F}_{0}}\,\!</math>, exceeds the critical value, <math>{{f}_{\alpha ,n-2,nm-n}}\,\!</math>, that is, if:<br />
<br />
<br />
::<math>{{F}_{0}}>{{f}_{\alpha ,n-2,nm-n}}\,\!</math><br />
<br />
<br />
then the hypothesis that the model adequately fits the data is rejected.<br />
<br />
<br />
'''Example'''<br />
<br />
Assume that a second set of observations is taken for the yield data of the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]]. The resulting observations are recorded in the following table. To conduct a lack-of-fit test on this data, the statistic, <math>{{F}_{0}}=M{{S}_{LOF}}/M{{S}_{PE}}\,\!</math>, can be calculated as shown next.<br />
<br />
[[Image:doet4.2.png|center|436px|Yield data from the first and second observation sets for the chemical process example in the Introduction.|link=]] <br />
<br />
<br />
'''Calculation of Least Square Estimates'''<br />
<br />
<br />
The parameters of the fitted regression model can be obtained as:<br />
<br />
<br />
::<math>\begin{align}<br />
<br />
{{{\hat{\beta }}}_{1}} = & \frac{\underset{i=1}{\overset{50}{\mathop \sum }}\,{{y}_{i}}{{x}_{i}}-\frac{\left( \underset{i=1}{\overset{50}{\mathop \sum }}\,{{y}_{i}} \right)\left( \underset{i=1}{\overset{50}{\mathop \sum }}\,{{x}_{i}} \right)}{50}}{\underset{i=1}{\overset{50}{\mathop \sum }}\,{{({{x}_{i}}-\bar{x})}^{2}}} \\ <br />
= & \frac{648532-\frac{8356\times 3742}{50}}{11358.72} \\ <br />
= & 2.04 \end{align}\,\!</math><br />
<br />
<br />
::<math>\begin{align} <br />
{{{\hat{\beta }}}_{0}}= & \bar{y}-{{{\hat{\beta }}}_{1}}\bar{x} \\ <br />
= & 167.12-2.04\times 74.84 \\ <br />
= & 14.47 <br />
<br />
\end{align}\,\!</math><br />
<br />
<br />
Knowing <math>{{\hat{\beta }}_{1}}\,\!</math> and <math>{{\hat{\beta }}_{0}}\,\!</math>, the fitted values, <math>{{\hat{y}}_{i}}\,\!</math>, can be calculated.<br />
<br />
<br />
'''Calculation of the Sum of Squares'''<br />
<br />
Using the fitted values, the sum of squares can be obtained as follows:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{T}} = & \underset{i=1}{\overset{50}{\mathop \sum }}\,{{({{y}_{i}}-\bar{y})}^{2}} \\ <br />
= & 47907.28 \end{align}\,\!</math><br />
<br />
<br />
::<math>\begin{align} <br />
S{{S}_{R}} = & \underset{i=1}{\overset{50}{\mathop \sum }}\,{{({{{\hat{y}}}_{i}}-\bar{y})}^{2}} \\ <br />
= & 47258.91 \end{align}<br />
\,\!</math><br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{E}} = & \underset{i=1}{\overset{50}{\mathop \sum }}\,{{({{y}_{i}}-{{{\hat{y}}}_{i}})}^{2}} \\ <br />
= & 648.37 \end{align}<br />
\,\!</math><br />
<br />
<br />
'''Calculation of <math>M{{S}_{LOF}}\,\!</math>'''<br />
<br />
<br />
The error sum of squares, <math>S{{S}_{E}}\,\!</math>, can now be split into the sum of squares due to pure error, <math>S{{S}_{PE}}\,\!</math>, and the sum of squares due to lack-of-fit, <math>S{{S}_{LOF}}\,\!</math>. <math>S{{S}_{PE}}\,\!</math> can be calculated as follows considering that in this example <math>n=25\,\!</math> and <math>m=2\,\!</math>:<br />
<br />
<br />
::<math><br />
<br />
\begin{align}<br />
S{{S}_{PE}} & = \underset{i=1}{\overset{n}{\mathop \sum }}\,\underset{j=1}{\overset{{{m}_{i}}}{\mathop \sum }}\,{{({{y}_{ij}}-{{{\bar{y}}}_{i}})}^{2}} \\ <br />
& = \underset{i=1}{\overset{25}{\mathop \sum }}\,\underset{j=1}{\overset{2}{\mathop \sum }}\,{{({{y}_{ij}}-{{{\bar{y}}}_{i}})}^{2}} \\ <br />
& = 350 <br />
\end{align}\,\!<br />
<br />
</math><br />
<br />
<br />
The number of degrees of freedom associated with <math>S{{S}_{PE}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
dof(S{{S}_{PE}}) & = nm-n \\ <br />
& = 25\times 2-25 \\ <br />
& = 25 <br />
\end{align}\,\!</math><br />
<br />
<br />
The corresponding mean square, <math>M{{S}_{PE}}\,\!</math>, can now be obtained as:<br />
<br />
<br />
::<math>\begin{align}<br />
M{{S}_{PE}} & = \frac{S{{S}_{PE}}}{dof(S{{S}_{PE}})} \\ <br />
& = \frac{350}{25} \\ <br />
& = 14 <br />
\end{align}\,\!</math><br />
<br />
<br />
<math>S{{S}_{LOF}}\,\!</math> can be obtained by subtraction from <math>S{{S}_{E}}\,\!</math> as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{LOF}} & = S{{S}_{E}}-S{{S}_{PE}} \\ <br />
& = 648.37-350 \\ <br />
& = 298.37 <br />
\end{align}\,\!</math><br />
<br />
<br />
Similarly, the number of degrees of freedom associated with <math>S{{S}_{LOF}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
dof(S{{S}_{LOF}}) & = dof(S{{S}_{E}})-dof(S{{S}_{PE}}) \\ <br />
& = (nm-2)-(nm-n) \\ <br />
& = 23 <br />
\end{align}\,\!</math><br />
<br />
<br />
The lack-of-fit mean square is:<br />
<br />
<br />
::<math>\begin{align}<br />
M{{S}_{LOF}} & = \frac{S{{S}_{LOF}}}{dof(S{{S}_{LOF}})} \\ <br />
& = \frac{298.37}{23} \\ <br />
& = 12.97 <br />
\end{align}\,\!</math><br />
<br />
<br />
'''Calculation of the Test Statistic'''<br />
<br />
<br />
The test statistic for the lack-of-fit test can now be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
{{f}_{0}} & = \frac{M{{S}_{LOF}}}{M{{S}_{PE}}} \\ <br />
& = \frac{12.97}{14} \\ <br />
& = 0.93 <br />
\end{align}\,\!</math><br />
<br />
<br />
The critical value for this test is:<br />
<br />
<br />
::<math>{{f}_{0.05,23,25}}=1.97\,\!</math><br />
<br />
<br />
Since <math>{{f}_{0}}<{{f}_{0.05,23,25}}\,\!</math>, we fail to reject the hypothesis that the model adequately fits the data. The <math>p\,\!</math> value for this case is:<br />
<br />
<br />
::<math>\begin{align}<br />
p\text{ }value & = 1-P(F\le {{f}_{0}}) \\ <br />
& = 1-0.43 \\ <br />
& = 0.57 <br />
\end{align}\,\!</math><br />
<br />
<br />
Therefore, at a significance level of 0.05 we conclude that the simple linear regression model, <math>y=14.47+2.04x\,\!</math>, is adequate for the observed data. The following table presents a summary of the ANOVA calculations for the lack-of-fit test.<br />
<br />
<br />
[[Image:doe4.18.png|center|700px|ANOVA table for the lack-of-fit test of the yield data example.]]<br />
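The lack-of-fit calculations in this example can be sketched end-to-end, starting from the error and pure-error sums of squares obtained above:<br />

```python
# Values from the lack-of-fit example (n = 25 levels, m = 2 repeats each)
SS_E, SS_PE = 648.37, 350.0
n, m = 25, 2

SS_LOF = SS_E - SS_PE            # lack-of-fit sum of squares, 298.37
MS_PE = SS_PE / (n * m - n)      # pure-error mean square, 350 / 25 = 14
MS_LOF = SS_LOF / (n - 2)        # lack-of-fit mean square, 298.37 / 23
f0 = MS_LOF / MS_PE              # test statistic, ~0.93

f_crit = 1.97                    # f_{0.05, 23, 25} taken from an F table
adequate = f0 < f_crit           # True: fail to reject, model is adequate
```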
<br />
==Transformations==<br />
The linear regression model may not be directly applicable to certain data. Non-linearity may be detected from scatter plots or may be known through the underlying theory of the product or process or from past experience. Transformations on either the predictor variable, <math>x\,\!</math>, or the response variable, <math>Y\,\!</math>, may often be sufficient to make the linear regression model appropriate for the transformed data.<br />
If it is known that the data follows the logarithmic distribution, then a logarithmic transformation on <math>Y\,\!</math> (i.e., <math>{{Y}^{*}}=\log (Y)\,\!</math>) might be useful. For data following the Poisson distribution, a square root transformation (<math>{{Y}^{*}}=\sqrt{Y}\,\!</math>) is generally applicable.<br />
<br />
Transformations on <math>Y\,\!</math> may also be applied based on the type of scatter plot obtained from the data. The following figure shows a few such examples. <br />
<br />
<br />
[[Image:doe4.17.png|center|500px|Transformations on for a few possible scatter plots. Plot (a) may require a square root transformation, (b) may require a logarithmic transformation and (c) may require a reciprocal transformation.|link=]]<br />
<br />
<br />
For the scatter plot labeled (a), a square root transformation (<math>{{Y}^{*}}=\sqrt{Y}\,\!</math>) is applicable, while for the plot labeled (b), a logarithmic transformation (i.e., <math>{{Y}^{*}}=\log (Y)\,\!</math>) may be applied. For the plot labeled (c), the reciprocal transformation (<math>{{Y}^{*}}=1/Y\,\!</math>) is applicable. At times it may be helpful to introduce a constant into the transformation of <math>Y\,\!</math>. For example, if <math>Y\,\!</math> is negative and the logarithmic transformation on <math>Y</math> seems applicable, a suitable constant, <math>k\,\!</math>, may be chosen to make all observed <math>k+Y\,\!</math> positive. Thus the transformation in this case would be <math>{{Y}^{*}}=\log (k+Y)\,\!</math>.<br />
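These transformations can be collected in a small helper; the data values below are made up for illustration:<br />

```python
import math

def transform(ys, kind, k=0.0):
    """Apply a variance-stabilizing transformation to the response values.

    kind: 'sqrt', 'log' or 'reciprocal'; k shifts the data so the log
    transformation is defined when some observations are negative.
    """
    if kind == "sqrt":
        return [math.sqrt(y) for y in ys]
    if kind == "log":
        return [math.log(k + y) for y in ys]
    if kind == "reciprocal":
        return [1.0 / y for y in ys]
    raise ValueError(f"unknown transformation: {kind}")

ys = [-0.5, 1.0, 3.0]
shifted_log = transform(ys, "log", k=1.5)   # k chosen so every k + y > 0
```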
<br />
The Box-Cox method may also be used to automatically identify a suitable power transformation for the data based on the relation:<br />
<br />
<br />
::<math>{{Y}^{*}}={{Y}^{\lambda }}\,\!</math><br />
<br />
<br />
Here the parameter <math>\lambda\,\!</math> is determined using the given data such that <math>S{{S}_{E}}\,\!</math> is minimized (details on this method are presented in [[One Factor Designs]]).</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Template:Bsbook&diff=65376Template:Bsbook2018-08-09T22:38:03Z<p>Chuck Smith: </p>
<hr />
<div><div class="noprint"><br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
|style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(114,159,113); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
{{Font|Chapter {{{1}}}:|16|tahoma|bold|white}} {{Font| {{PAGENAME}}|16|tahoma|bold|white}}<br />
|}<br />
{| width="300" align="right" class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="5"<br />
|-<br />
| width="10" bgcolor="#ffffff" rowspan="2" | <br> <br />
| align="center" valign="middle" style="border: 1px solid rgb(206, 242, 224); color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);" | <br />
[[Image:BlockSimbox.png|100px|link=System_Analysis_Reference]] <br>{{font|[[System Analysis Reference|Index]]|12|bold|blue}} <br />
{| width="100%" align="center" class="FCK__ShowTableBorders" border="0" cellspacing="1" cellpadding="1"<br />
|-<br />
| align="center" valign="middle" | {{Font|Chapter {{{1}}}|16|tahoma|bold|white}}&nbsp;<br />
|-<br />
| align="center" valign="middle" rowspan="2" | {{Font|{{PAGENAME}}|12|tahoma|normal|black}}&nbsp;<br />
|}<br />
<br />
<span style="font-size: 9pt; font-weight: normal; font-name: tahoma;"> <br />
__TOC__ <br />
</span><br />
[[Image:synthesis-icon.png|link=https://koi-3QN72QORVC.marketingautomation.services/net/m?md=rn38ZvdjPeN10QXqTVSp0eMElPXxiv%2Fp|left]]<p style="text-align: left;">'''Available Software:''' <br>[https://koi-3QN72QORVC.marketingautomation.services/net/m?md=rn38ZvdjPeN10QXqTVSp0eMElPXxiv%2Fp BlockSim]</p><br />
[[Image:Examples_icon.png|link=BlockSim_Examples|left]]<p style="text-align: left;">'''More Resources:''' <br>[[BlockSim Examples|BlockSim Examples Collection]]</p><br />
[[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/System_Analysis_Reference.pdf|left|36px]]<p style="text-align: left;">'''Download Reference Book:''' <br>[http://www.synthesisplatform.net/references/System_Analysis_Reference.pdf System Analysis (*.pdf)]</p><br />
[[Image:Generate_book.png|link=http://reliawiki.org/index.php/ReliaWiki:Books/System_Analysis_Reference_eBook|left|36px]]<p style="text-align: left;">'''Generate Reference Book:''' <br>[http://reliawiki.org/index.php/ReliaWiki:Books/System_Analysis_Reference_eBook File] may be more up-to-date<br />
</p><br />
|}<br />
</div> <br />
<includeonly></includeonly><noinclude>{{Template:Bsbook/documentation}}</noinclude></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Template:Allbooksindex_footer&diff=65374Template:Allbooksindex footer2018-08-09T22:33:27Z<p>Chuck Smith: </p>
<hr />
<div><hr><br />
[[File:examples_heading.png|link={{{1|Weibull++ Examples}}}]]<br />
As a supplement to the reference book, the [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Rw01CJDOxn%2FabhkPlZsy6DwBQ%2BaCXsGR {{{2|Weibull++}}}] examples collection provides quick access to a variety of step-by-step examples that demonstrate how you can put the capabilities of [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Rw01CJDOxn%2FabhkPlZsy6DwBQ%2BaCXsGR {{{2|Weibull++}}}] to work for you. Some of these examples also appear in the reference book. Others have been published in other locations, such as [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Yy%2FTDMQctw6K8FJ9AwXWkxlXSIiWl4Fe www.ReliaSoft.com].</p><noinclude>{{Template:Allbooksindex footer/documentation}}[[Category: Banners]]</noinclude></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Template:Allbooksindex_footer&diff=65373Template:Allbooksindex footer2018-08-09T22:32:09Z<p>Chuck Smith: </p>
<hr />
<div><hr><br />
[[File:examples_heading.png|link={{{1|Weibull++ Examples}}}]]<br />
As a supplement to the reference book, the {{{2|Weibull++}}} examples collection provides quick access to a variety of step-by-step examples that demonstrate how you can put the capabilities of {{{2|Weibull++}}} to work for you. Some of these examples also appear in the reference book. Others have been published in other locations, such as [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Yy%2FTDMQctw6K8FJ9AwXWkxlXSIiWl4Fe www.ReliaSoft.com].</p><noinclude>{{Template:Allbooksindex footer/documentation}}[[Category: Banners]]</noinclude></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Main_Page&diff=65370Main Page2018-08-09T22:24:35Z<p>Chuck Smith: </p>
<hr />
<div>{{DISPLAYTITLE:ReliaWiki}} __NOTOC__ __NOEDITSECTION__ <br />
<div style="position:relative; float:left; display:block; width:100%; margin:10px;"><br />
ReliaWiki is owned and maintained by [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=p8TH7lvo8O%2FAHl72KXuPWlZmIR3P7GFj HBM Prenscia] and is an extension of &nbsp;[https://koi-3QN72QORVC.marketingautomation.services/net/m?md=vQQfu%2FkoXaOAQyVYZnqaSTIPBB24C0kx weibull.com]. <!--For additional resources, visit [http://www.reliasoft.tv ReliaSoft.tv], [http://www.reliability-discussion.com/ Reliability Discussion Forum] and the [http://www.reliabilityprofessional.org/ Certified Reliability Professional (CRP) Program]. --><br />
</div><br />
<br />
<div style="position:relative; float:left; width:100%;"><br />
<br />
<div style="position:relative; float:left; width:49%; margin:5px;"><br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=blue_triangle.png<br />
|title=Life Data Analysis<br />
|links=<br />
{{TitleBoxLink|link=Life_Data_Analysis_Reference_Book|text=Reference Book}}<br />
{{TitleBoxLink|link=Weibull++_Examples|text=Weibull++ Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=green_triangle.png<br />
|title=System Analysis (RBDs and Fault Trees)<br />
|links=<br />
{{TitleBoxLink|link=System_Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=BlockSim_Examples|text=BlockSim Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=red_triangle.png<br />
|title=Reliability Growth and Repairable System Analysis<br />
|links=<br />
{{TitleBoxLink|link=Reliability_Growth_and_Repairable_System_Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=RGA_Examples|text=RGA Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=rcm_triangle.png<br />
|title=Reliability Centered Maintenance (RCM)<br />
|links=<br />
{{TitleBoxLink|link=RCM%2B%2B_Examples|text=RCM++ Software Examples}}<br />
}}<br />
</div><br />
<br />
<div style="position:relative; float:left; width:49%; margin:5px;"><br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=yellow_triangle.png<br />
|title=Accelerated Life Testing Data Analysis<br />
|links=<br />
{{TitleBoxLink|link=Accelerated Life Testing Data Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=ALTA_Examples|text=ALTA Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=doe_triangle.png<br />
|title=Experiment Design and Analysis (DOE)<br />
|links=<br />
{{TitleBoxLink|link=Experiment_Design_and_Analysis_Reference|text=Reference Book}}<br />
<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=fmea_triangle.png<br />
|title=Failure Modes &amp; Effects Analysis (FMEA)<br />
|links=<br />
{{TitleBoxLink|link=FMEA_and_RCM_Articles|text=Articles}}<br />
{{TitleBoxLink|link=Xfmea_Examples|text=Xfmea Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=api_triangle.png<br />
|title=Synthesis API<br />
|links=<br />
{{TitleBoxLink|link=Synthesis API Reference|text=API Reference}}<br />
{{TitleBoxLink|link=API_Changelog|text=API Changelog}}<br />
}}<br />
</div><br />
</div><br />
<div style="position:relative; float:left; width:100%;"><br />
<br><br><cshow logged="1">This text will appear if a user with membership to 'sysop' group views this page</cshow> {{ReliaSoft Footer}}<br />
</div></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Multiple_Linear_Regression_Analysis&diff=65369Multiple Linear Regression Analysis2018-08-09T22:22:29Z<p>Chuck Smith: </p>
<hr />
<div>{{Template:Doebook|4}}<br />
This chapter expands on the analysis of simple linear regression models and discusses the analysis of multiple linear regression models. A major portion of the results displayed in [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Rw01CJDOxn%2FabhkPlZsy6DwBQ%2BaCXsGR Weibull++] DOE folios are explained in this chapter because these results are associated with multiple linear regression. One of the applications of multiple linear regression models is Response Surface Methodology (RSM). RSM is a method used to locate the optimum value of the response and is one of the final stages of experimentation. It is discussed in [[Response_Surface_Methods_for_Optimization| Response Surface Methods]]. Towards the end of this chapter, the concept of using indicator variables in regression models is explained. Indicator variables are used to represent qualitative factors in regression models. The concept of using indicator variables is important to gain an understanding of ANOVA models, which are the models used to analyze data obtained from experiments. These models can be thought of as first order multiple linear regression models where all the factors are treated as qualitative factors. ANOVA models are discussed in the [[One Factor Designs]] and [[General Full Factorial Designs]] chapters.<br />
<br />
==Multiple Linear Regression Model==<br />
<br />
A linear regression model that contains more than one predictor variable is called a ''multiple linear regression model''. The following model is a multiple linear regression model with two predictor variables, <math>{{x}_{1}}\,\!</math> and <math>{{x}_{2}}\,\!</math>. <br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\epsilon\,\!</math><br />
<br />
<br />
The model is linear because it is linear in the parameters <math>{{\beta }_{0}}\,\!</math>, <math>{{\beta }_{1}}\,\!</math> and <math>{{\beta }_{2}}\,\!</math>. The model describes a plane in the three-dimensional space of <math>Y\,\!</math>, <math>{{x}_{1}}\,\!</math> and <math>{{x}_{2}}\,\!</math>. The parameter <math>{{\beta }_{0}}\,\!</math> is the intercept of this plane. Parameters <math>{{\beta }_{1}}\,\!</math> and <math>{{\beta }_{2}}\,\!</math> are referred to as ''partial regression coefficients''. Parameter <math>{{\beta }_{1}}\,\!</math> represents the change in the mean response corresponding to a unit change in <math>{{x}_{1}}\,\!</math> when <math>{{x}_{2}}\,\!</math> is held constant. Parameter <math>{{\beta }_{2}}\,\!</math> represents the change in the mean response corresponding to a unit change in <math>{{x}_{2}}\,\!</math> when <math>{{x}_{1}}\,\!</math> is held constant. <br />
Consider the following example of a multiple linear regression model with two predictor variables, <math>{{x}_{1}}\,\!</math> and <math>{{x}_{2}}\,\!</math> :<br />
<br />
<br />
::<math>Y=30+5{{x}_{1}}+7{{x}_{2}}+\epsilon \,\!</math><br />
<br />
<br />
This regression model is a first order multiple linear regression model. This is because the maximum power of the variables in the model is 1. (The regression plane corresponding to this model is shown in the figure below.) Also shown is an observed data point and the corresponding random error, <math>\epsilon\,\!</math>. The true regression model is usually never known (and therefore the values of the random error terms corresponding to observed data points remain unknown). However, the regression model can be estimated by calculating the parameters of the model for an observed data set. This is explained in [[Multiple_Linear_Regression_Analysis#Estimating_Regression_Models_Using_Least_Squares| Estimating Regression Models Using Least Squares]].<br />
<br />
One of the following figures shows the contour plot for the regression model given by the above equation. The contour plot shows lines of constant mean response values as a function of <math>{{x}_{1}}\,\!</math> and <math>{{x}_{2}}\,\!</math>. The contour lines for the given regression model are straight lines as seen on the plot. Straight contour lines result for first order regression models with no interaction terms.<br />
<br />
A linear regression model may also take the following form:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+\epsilon\,\!</math><br />
<br />
<br />
A cross-product term, <math>{{x}_{1}}{{x}_{2}}\,\!</math>, is included in the model. This term represents an interaction effect between the two variables <math>{{x}_{1}}\,\!</math> and <math>{{x}_{2}}\,\!</math>. Interaction means that the effect produced by a change in the predictor variable on the response depends on the level of the other predictor variable(s). As an example of a linear regression model with interaction, consider the model given by the equation <math>Y=30+5{{x}_{1}}+7{{x}_{2}}+3{{x}_{1}}{{x}_{2}}+\epsilon\,\!</math>. The regression plane and contour plot for this model are shown in the following two figures, respectively.<br />
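The meaning of the interaction term can be checked with simple arithmetic (a hypothetical sketch using the mean response of the model above): the change in the mean response per unit change in <math>{{x}_{1}}\,\!</math> is <math>5+3{{x}_{2}}\,\!</math>, so it depends on the level of <math>{{x}_{2}}\,\!</math>.<br />

```python
def mean_response(x1, x2):
    # mean of Y = 30 + 5*x1 + 7*x2 + 3*x1*x2 (the error term has mean zero)
    return 30 + 5 * x1 + 7 * x2 + 3 * x1 * x2

# effect of a unit increase in x1, evaluated at two levels of x2
slope_at_x2_0 = mean_response(1, 0) - mean_response(0, 0)  # 5
slope_at_x2_2 = mean_response(1, 2) - mean_response(0, 2)  # 5 + 3*2 = 11
```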
<br />
<br />
[[Image:doe5.1.png|center|437px|Regression plane for the model <math>Y=30+5 x_1+7 x_2+\epsilon\,\!</math>]]<br />
<br />
<br />
[[Image:doe5.2.png|center|337px|Contour plot for the model <math>Y=30+5 x_1+7 x_2+\epsilon\,\!</math>]]<br />
<br />
<br />
Now consider the regression model shown next:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}x_{1}^{2}+{{\beta }_{3}}x_{1}^{3}+\epsilon\,\!</math><br />
<br />
<br />
This model is also a linear regression model and is referred to as a ''polynomial regression model''. Polynomial regression models contain squared and higher order terms of the predictor variables making the response surface curvilinear. As an example of a polynomial regression model with an interaction term consider the following equation:<br />
<br />
<br />
::<math>Y=500+5{{x}_{1}}+7{{x}_{2}}-3x_{1}^{2}-5x_{2}^{2}+3{{x}_{1}}{{x}_{2}}+\epsilon\,\!</math><br />
<br />
<br />
This model is a ''second order'' model because the maximum power of the terms in the model is two. The regression surface for this model is shown in the following figure. Such regression models are used in RSM to find the optimum value of the response, <math>Y\,\!</math> (for details see [[Response_Surface_Methods_for_Optimization| Response Surface Methods for Optimization]]). Notice that, although the shape of the regression surface is curvilinear, the regression model is still linear because the model is linear in the parameters. The contour plot for this model is shown in the second of the following two figures.<br />
<br />
<br />
[[Image:doe5.3.png|center|437px|Regression plane for the model <math>Y=30+5 x_1+7 x_2+3 x_1 x_2+\epsilon\,\!</math>]]<br />
<br />
<br />
[[Image:doe5.4.png|center|337px|Contour plot for the model <math>Y=30+5 x_1+7 x_2+3 x_1 x_2+\epsilon\,\!</math>]]<br />
<br />
<br />
All multiple linear regression models can be expressed in the following general form:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+...+{{\beta }_{k}}{{x}_{k}}+\epsilon\,\!</math><br />
<br />
<br />
where <math>k\,\!</math> denotes the number of terms in the model. For example, the model can be written in the general form using <math>{{x}_{3}}=x_{1}^{2}\,\!</math>, <math>{{x}_{4}}=x_{2}^{2}\,\!</math> and <math>{{x}_{5}}={{x}_{1}}{{x}_{2}}\,\!</math> as follows:<br />
<br />
<br />
::<math>Y=500+5{{x}_{1}}+7{{x}_{2}}-3{{x}_{3}}-5{{x}_{4}}+3{{x}_{5}}+\epsilon\,\!</math><br />
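The substitution can be verified numerically. The following hypothetical sketch (with <math>{{x}_{4}}=x_{2}^{2}\,\!</math>, matching the second order model above) confirms that both forms give the same mean response:<br />

```python
def second_order(x1, x2):
    # mean response: 500 + 5*x1 + 7*x2 - 3*x1^2 - 5*x2^2 + 3*x1*x2
    return 500 + 5 * x1 + 7 * x2 - 3 * x1**2 - 5 * x2**2 + 3 * x1 * x2

def general_form(x1, x2):
    # the same model written as a first order model in five variables
    x3, x4, x5 = x1**2, x2**2, x1 * x2
    return 500 + 5 * x1 + 7 * x2 - 3 * x3 - 5 * x4 + 3 * x5
```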
<br />
==Estimating Regression Models Using Least Squares==<br />
<br />
Consider a multiple linear regression model with <math>k\,\!</math> predictor variables:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+...+{{\beta }_{k}}{{x}_{k}}+\epsilon\,\!</math><br />
<br />
<br />
Let each of the <math>k\,\!</math> predictor variables, <math>{{x}_{1}}\,\!</math>, <math>{{x}_{2}}\,\!</math>... <math>{{x}_{k}}\,\!</math>, have <math>n\,\!</math> levels. Then <math>{{x}_{ij}}\,\!</math> represents the <math>i\,\!</math> th level of the <math>j\,\!</math> th predictor variable <math>{{x}_{j}}\,\!</math>. For example, <math>{{x}_{51}}\,\!</math> represents the fifth level of the first predictor variable <math>{{x}_{1}}\,\!</math>, while <math>{{x}_{19}}\,\!</math> represents the first level of the ninth predictor variable, <math>{{x}_{9}}\,\!</math>. Observations, <math>{{y}_{1}}\,\!</math>, <math>{{y}_{2}}\,\!</math>... <math>{{y}_{n}}\,\!</math>, recorded for each of these <math>n\,\!</math> levels can be expressed in the following way:<br />
<br />
<br />
::<math>\begin{align}<br />
{{y}_{1}}= & {{\beta }_{0}}+{{\beta }_{1}}{{x}_{11}}+{{\beta }_{2}}{{x}_{12}}+...+{{\beta }_{k}}{{x}_{1k}}+{{\epsilon }_{1}} \\ <br />
{{y}_{2}}= & {{\beta }_{0}}+{{\beta }_{1}}{{x}_{21}}+{{\beta }_{2}}{{x}_{22}}+...+{{\beta }_{k}}{{x}_{2k}}+{{\epsilon }_{2}} \\ <br />
& .. \\ <br />
{{y}_{i}}= & {{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}+...+{{\beta }_{k}}{{x}_{ik}}+{{\epsilon }_{i}} \\ <br />
& .. \\ <br />
{{y}_{n}}= & {{\beta }_{0}}+{{\beta }_{1}}{{x}_{n1}}+{{\beta }_{2}}{{x}_{n2}}+...+{{\beta }_{k}}{{x}_{nk}}+{{\epsilon }_{n}} <br />
\end{align}\,\!</math><br />
<br />
<br />
The system of <math>n\,\!</math> equations shown previously can be represented in matrix notation as follows:<br />
<br />
<br />
::<math>y=X\beta +\epsilon\,\!</math><br />
<br />
<br />
:where<br />
<br />
<br />
::<math>y=\left[ \begin{matrix}<br />
{{y}_{1}} \\<br />
{{y}_{2}} \\<br />
. \\<br />
. \\<br />
. \\<br />
{{y}_{n}} \\<br />
\end{matrix} \right]\text{ }X=\left[ \begin{matrix}<br />
1 & {{x}_{11}} & {{x}_{12}} & . & . & . & {{x}_{1k}} \\<br />
1 & {{x}_{21}} & {{x}_{22}} & . & . & . & {{x}_{2k}} \\<br />
. & . & . & {} & {} & {} & . \\<br />
. & . & . & {} & {} & {} & . \\<br />
. & . & . & {} & {} & {} & . \\<br />
1 & {{x}_{n1}} & {{x}_{n2}} & . & . & . & {{x}_{nk}} \\<br />
\end{matrix} \right]\,\!</math><br />
<br />
<br />
::<math>\beta =\left[ \begin{matrix}<br />
{{\beta }_{0}} \\<br />
{{\beta }_{1}} \\<br />
. \\<br />
. \\<br />
. \\<br />
{{\beta }_{k}} \\<br />
\end{matrix} \right]\text{ and }\epsilon =\left[ \begin{matrix}<br />
{{\epsilon }_{1}} \\<br />
{{\epsilon }_{2}} \\<br />
. \\<br />
. \\<br />
. \\<br />
{{\epsilon }_{n}} \\<br />
\end{matrix} \right]\,\!</math><br />
<br />
<br />
The matrix <math>X\,\!</math> is referred to as the ''design matrix''. It contains information about the levels of the predictor variables at which the observations are obtained. The vector <math>\beta\,\!</math> contains all the regression coefficients. To obtain the regression model, <math>\beta\,\!</math> must be estimated. The least squares estimate of <math>\beta\,\!</math> is obtained using the following equation:<br />
<br />
<br />
::<math>\hat{\beta }={{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y\,\!</math><br />
<br />
<br />
where <math>^{\prime }\,\!</math> represents the transpose of the matrix while <math>^{-1}\,\!</math> represents the matrix inverse. Knowing the estimates, <math>\hat{\beta }\,\!</math>, the multiple linear regression model can now be estimated as:<br />
<br />
<br />
::<math>\hat{y}=X\hat{\beta }\,\!</math><br />
<br />
<br />
The estimated regression model is also referred to as the ''fitted model''. The observations, <math>{{y}_{i}}\,\!</math>, may be different from the fitted values <math>{{\hat{y}}_{i}}\,\!</math> obtained from this model. The difference between these two values is the residual, <math>{{e}_{i}}\,\!</math>. The vector of residuals, <math>e\,\!</math>, is obtained as:<br />
<br />
<br />
::<math>e=y-\hat{y}\,\!</math><br />
<br />
<br />
The fitted model can also be written as follows, using <math>\hat{\beta }={{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y\,\!</math>:<br />
<br />
<br />
::<math>\begin{align}<br />
\hat{y} &= & X\hat{\beta } \\ <br />
& = & X{{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y \\ <br />
& = & Hy <br />
\end{align}\,\!</math><br />
<br />
<br />
where <math>H=X{{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}\,\!</math>. The matrix, <math>H\,\!</math>, is referred to as the hat matrix. It transforms the vector of the observed response values, <math>y\,\!</math>, to the vector of fitted values, <math>\hat{y}\,\!</math>.<br />
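The matrix computations above translate directly into code. The following NumPy sketch (hypothetical data, not the chapter's example) estimates <math>\hat{\beta }\,\!</math> from the normal equations, forms the hat matrix and checks that <math>\hat{y}=Hy\,\!</math>:<br />

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30
x1, x2 = rng.uniform(0, 10, n), rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x1, x2])      # design matrix with intercept
y = 2.0 + 1.5 * x1 - 0.5 * x2 + rng.normal(0, 1, n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^-1 X'y
H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat matrix
y_fit = X @ beta_hat                           # fitted values
e = y - y_fit                                  # residuals
```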
<br />
===Example===<br />
An analyst studying a chemical process expects the yield to be affected by the levels of two factors, <math>{{x}_{1}}\,\!</math> and <math>{{x}_{2}}\,\!</math>. Observations recorded for various levels of the two factors are shown in the following table. The analyst wants to fit a first order regression model to the data. Interaction between <math>{{x}_{1}}\,\!</math> and <math>{{x}_{2}}\,\!</math> is not expected based on knowledge of similar processes. Units of the factor levels and the yield are ignored for the analysis.<br />
<br />
<br />
[[Image:doet5.1.png||center|351px|Observed yield data for various levels of two factors.|link=]]<br />
<br />
<br />
The data of the above table can be entered into the Weibull++ DOE folio using the multiple linear regression folio tool as shown in the following figure. <br />
<br />
<br />
[[Image:doe5_7.png|center|618px|Multiple Regression tool in Weibull++ with the data in the table.|link=]]<br />
<br />
<br />
A scatter plot for the data is shown next. <br />
<br />
<br />
[[Image:doe5.8.png|center|361px|Three-dimensional scatter plot for the observed data in the table.|link=]]<br />
<br />
<br />
The first order regression model applicable to this data set having two predictor variables is:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\epsilon\,\!</math><br />
<br />
<br />
where the dependent variable, <math>Y\,\!</math>, represents the yield and the predictor variables, <math>{{x}_{1}}\,\!</math> and <math>{{x}_{2}}\,\!</math>, represent the two factors respectively. The <math>X\,\!</math> and <math>y\,\!</math> matrices for the data can be obtained as: <br />
<br />
<br />
::<math>X=\left[ \begin{matrix}<br />
1 & 41.9 & 29.1 \\<br />
1 & 43.4 & 29.3 \\<br />
. & . & . \\<br />
. & . & . \\<br />
. & . & . \\<br />
1 & 77.8 & 32.9 \\<br />
\end{matrix} \right]\text{ }y=\left[ \begin{matrix}<br />
251.3 \\<br />
251.3 \\<br />
. \\<br />
. \\<br />
. \\<br />
349.0 \\<br />
\end{matrix} \right]\,\!</math><br />
<br />
<br />
The least square estimates, <math>\hat{\beta }\,\!</math>, can now be obtained:<br />
<br />
<br />
::<math>\begin{align}<br />
\hat{\beta } &= & {{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y \\ <br />
& = & {{\left[ \begin{matrix}<br />
17 & 941 & 525.3 \\<br />
941 & 54270 & 29286 \\<br />
525.3 & 29286 & 16254 \\<br />
\end{matrix} \right]}^{-1}}\left[ \begin{matrix}<br />
4902.8 \\<br />
276610 \\<br />
152020 \\<br />
\end{matrix} \right] \\ <br />
& = & \left[ \begin{matrix}<br />
-153.51 \\<br />
1.24 \\<br />
12.08 \\<br />
\end{matrix} \right] <br />
\end{align}\,\!</math><br />
<br />
<br />
:Thus:<br />
<br />
<br />
::<math>\hat{\beta }=\left[ \begin{matrix}<br />
{{{\hat{\beta }}}_{0}} \\<br />
{{{\hat{\beta }}}_{1}} \\<br />
{{{\hat{\beta }}}_{2}} \\<br />
\end{matrix} \right]=\left[ \begin{matrix}<br />
-153.51 \\<br />
1.24 \\<br />
12.08 \\<br />
\end{matrix} \right]\,\!</math><br />
<br />
<br />
and the estimated regression coefficients are <math>{{\hat{\beta }}_{0}}=-153.51\,\!</math>, <math>{{\hat{\beta }}_{1}}=1.24\,\!</math> and <math>{{\hat{\beta }}_{2}}=12.08\,\!</math>. The fitted regression model is:<br />
<br />
<br />
::<math>\begin{align}<br />
\hat{y} & = & {{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{1}}{{x}_{1}}+{{{\hat{\beta }}}_{2}}{{x}_{2}} \\ <br />
& = & -153.5+1.24{{x}_{1}}+12.08{{x}_{2}} <br />
\end{align}\,\!</math><br />
<br />
<br />
The fitted regression model can be viewed in the Weibull++ DOE folio, as shown next.<br />
<br />
<br />
[[Image:doe5_9.png|center|434px|Equation of the fitted regression model for the data from the table.|link=]]<br />
<br />
<br />
A plot of the fitted regression plane is shown in the following figure. <br />
<br />
<br />
[[Image:doe5.10.png|center|362px|Fitted regression plane <math>\hat{y}=-153.5+1.24 x_1+12.08 x_2\,\!</math> for the data from the table.|link=]]<br />
<br />
<br />
The fitted regression model can be used to obtain fitted values, <math>{{\hat{y}}_{i}}\,\!</math>, corresponding to an observed response value, <math>{{y}_{i}}\,\!</math>. For example, the fitted value corresponding to the fifth observation is:<br />
<br />
<br />
::<math>\begin{align}<br />
{{{\hat{y}}}_{i}} &= & -153.5+1.24{{x}_{i1}}+12.08{{x}_{i2}} \\ <br />
{{{\hat{y}}}_{5}} & = & -153.5+1.24{{x}_{51}}+12.08{{x}_{52}} \\ <br />
& = & -153.5+1.24(47.3)+12.08(29.9) \\ <br />
& = & 266.3 <br />
\end{align}\,\!</math><br />
<br />
<br />
The observed fifth response value is <math>{{y}_{5}}=273.0\,\!</math>. The residual corresponding to this value is:<br />
<br />
<br />
::<math>\begin{align}<br />
{{e}_{i}} & = & {{y}_{i}}-{{{\hat{y}}}_{i}} \\ <br />
{{e}_{5}}& = & {{y}_{5}}-{{{\hat{y}}}_{5}} \\ <br />
& = & 273.0-266.3 \\ <br />
& = & 6.7 <br />
\end{align}\,\!</math><br />
<br />
<br />
In Weibull++ DOE folios, fitted values and residuals are shown in the Diagnostic Information table of the detailed summary of results. The values are shown in the following figure. <br />
<br />
<br />
[[Image:doe5_11.png|center|886px|Fitted values and residuals for the data in the table.|link=]]<br />
<br />
<br />
The fitted regression model can also be used to predict response values. For example, to obtain the response value for a new observation corresponding to 47 units of <math>{{x}_{1}}\,\!</math> and 31 units of <math>{{x}_{2}}\,\!</math>, the value is calculated using:<br />
<br />
<br />
::<math>\begin{align}<br />
\hat{y}(47,31)& = & -153.5+1.24(47)+12.08(31) \\ <br />
& = & 279.26 <br />
\end{align}\,\!</math><br />
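The hand calculations above can be replayed with a short sketch that uses only the fitted coefficients reported in this example:<br />

```python
# fitted model from the example: y_hat = -153.5 + 1.24*x1 + 12.08*x2
def y_hat(x1, x2):
    return -153.5 + 1.24 * x1 + 12.08 * x2

fit5 = y_hat(47.3, 29.9)   # fitted value for the fifth observation (~266.3)
e5 = 273.0 - fit5          # residual for the fifth observation (~6.7)
pred = y_hat(47, 31)       # predicted response at x1 = 47, x2 = 31 (279.26)
```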
<br />
===Properties of the Least Square Estimators for Beta===<br />
The least square estimates, <math>{{\hat{\beta }}_{0}}\,\!</math>, <math>{{\hat{\beta }}_{1}}\,\!</math>, <math>{{\hat{\beta }}_{2}}\,\!</math>... <math>{{\hat{\beta }}_{k}}\,\!</math>, are unbiased estimators of <math>{{\beta }_{0}}\,\!</math>, <math>{{\beta }_{1}}\,\!</math>, <math>{{\beta }_{2}}\,\!</math>... <math>{{\beta }_{k}}\,\!</math>, provided that the random error terms, <math>{{\epsilon }_{i}}\,\!</math>, are normally and independently distributed. The variances of the <math>\hat{\beta }\,\!</math> s are obtained using the <math>{{({{X}^{\prime }}X)}^{-1}}\,\!</math> matrix. The variance-covariance matrix of the estimated regression coefficients is obtained as follows:<br />
<br />
<br />
::<math>C={{\hat{\sigma }}^{2}}{{({{X}^{\prime }}X)}^{-1}}\,\!</math><br />
<br />
<br />
<math>C\,\!</math> is a symmetric matrix whose diagonal elements, <math>{{C}_{jj}}\,\!</math>, represent the variance of the estimated <math>j\,\!</math> th regression coefficient, <math>{{\hat{\beta }}_{j}}\,\!</math>. The off-diagonal elements, <math>{{C}_{ij}}\,\!</math>, represent the covariance between the <math>i\,\!</math> th and <math>j\,\!</math> th estimated regression coefficients, <math>{{\hat{\beta }}_{i}}\,\!</math> and <math>{{\hat{\beta }}_{j}}\,\!</math>. The value of <math>{{\hat{\sigma }}^{2}}\,\!</math> is obtained using the error mean square, <math>M{{S}_{E}}\,\!</math>. The variance-covariance matrix for the data in the table (see [[Multiple_Linear_Regression_Analysis#Estimating_Regression_Models_Using_Least_Squares| Estimating Regression Models Using Least Squares]]) can be viewed in the DOE folio, as shown next.<br />
<br />
<br />
[[Image:doe5_12.png|center|619px|The variance-covariance matrix for the data in table.|link=]]<br />
<br />
<br />
Calculations to obtain the matrix are given in this [[Multiple_Linear_Regression_Analysis#Example| example]]. The positive square root of <math>{{C}_{jj}}\,\!</math> represents the estimated standard deviation of the <math>j\,\!</math> th regression coefficient, <math>{{\hat{\beta }}_{j}}\,\!</math>, and is called the estimated standard error of <math>{{\hat{\beta }}_{j}}\,\!</math> (abbreviated <math>se({{\hat{\beta }}_{j}})\,\!</math> ).<br />
<br />
<br />
::<math>se({{\hat{\beta }}_{j}})=\sqrt{{{C}_{jj}}}\,\!</math><br />
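A hypothetical NumPy sketch of these computations (synthetic data, not the chapter's example), where <math>{{\hat{\sigma }}^{2}}\,\!</math> is taken as the error mean square:<br />

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 25, 2
X = np.column_stack([np.ones(n)] + [rng.uniform(0, 10, n) for _ in range(k)])
y = 1.0 + 2.0 * X[:, 1] - 3.0 * X[:, 2] + rng.normal(0, 1, n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta_hat
mse = (e @ e) / (n - (k + 1))          # error mean square, estimates sigma^2
C = mse * np.linalg.inv(X.T @ X)       # variance-covariance matrix of beta_hat
se = np.sqrt(np.diag(C))               # estimated standard errors se(beta_j)
```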
<br />
==Hypothesis Tests in Multiple Linear Regression==<br />
This section discusses hypothesis tests on the regression coefficients in multiple linear regression. As in the case of simple linear regression, these tests can only be carried out if it can be assumed that the random error terms, <math>{{\epsilon }_{i}}\,\!</math>, are normally and independently distributed with a mean of zero and variance of <math>{{\sigma }^{2}}\,\!</math>.<br />
Three types of hypothesis tests can be carried out for multiple linear regression models:<br />
<br />
#Test for significance of regression: This test checks the significance of the whole regression model.<br />
#<math>t\,\!</math> test: This test checks the significance of individual regression coefficients.<br />
#<math>F\,\!</math> test: This test can be used to simultaneously check the significance of a number of regression coefficients. It can also be used to test individual coefficients.<br />
<br />
===Test for Significance of Regression===<br />
The test for significance of regression in the case of multiple linear regression analysis is carried out using the analysis of variance. The test is used to check if a linear statistical relationship exists between the response variable and at least one of the predictor variables. The statements for the hypotheses are:<br />
<br />
<br />
::<math>\begin{align}<br />
& {{H}_{0}}:& {{\beta }_{1}}={{\beta }_{2}}=...={{\beta }_{k}}=0 \\ <br />
& {{H}_{1}}:& {{\beta }_{j}}\ne 0\text{ for at least one }j <br />
\end{align}\,\!</math><br />
<br />
<br />
The test for <math>{{H}_{0}}\,\!</math> is carried out using the following statistic:<br />
<br />
<br />
::<math>{{F}_{0}}=\frac{M{{S}_{R}}}{M{{S}_{E}}}\,\!</math><br />
<br />
<br />
where <math>M{{S}_{R}}\,\!</math> is the regression mean square and <math>M{{S}_{E}}\,\!</math> is the error mean square. If the null hypothesis, <math>{{H}_{0}}\,\!</math>, is true then the statistic <math>{{F}_{0}}\,\!</math> follows the <math>F\,\!</math> distribution with <math>k\,\!</math> degrees of freedom in the numerator and <math>n-(k+1)\,\!</math> degrees of freedom in the denominator. The null hypothesis, <math>{{H}_{0}}\,\!</math>, is rejected if the calculated statistic, <math>{{F}_{0}}\,\!</math>, is such that:<br />
<br />
<br />
::<math>{{F}_{0}}>{{f}_{\alpha ,k,n-(k+1)}}\,\!</math><br />
<br />
<br />
====Calculation of the Statistic <math>{{F}_{0}}\,\!</math>====<br />
To calculate the statistic <math>{{F}_{0}}\,\!</math>, the mean squares <math>M{{S}_{R}}\,\!</math> and <math>M{{S}_{E}}\,\!</math> must be known. As explained in [[Simple_Linear_Regression_Analysis|Simple Linear Regression Analysis]], the mean squares are obtained by dividing the sum of squares by their degrees of freedom. For example, the total mean square, <math>M{{S}_{T}}\,\!</math>, is obtained as follows:<br />
<br />
<br />
::<math>M{{S}_{T}}=\frac{S{{S}_{T}}}{dof(S{{S}_{T}})}\,\!</math><br />
<br />
<br />
where <math>S{{S}_{T}}\,\!</math> is the total sum of squares and <math>dof(S{{S}_{T}})\,\!</math> is the number of degrees of freedom associated with <math>S{{S}_{T}}\,\!</math>. In multiple linear regression, the following equation is used to calculate <math>S{{S}_{T}}\,\!</math>:<br />
<br />
<br />
::<math>S{{S}_{T}}={{y}^{\prime }}\left[ I-(\frac{1}{n})J \right]y\,\!</math><br />
<br />
<br />
where <math>n\,\!</math> is the total number of observations, <math>y\,\!</math> is the vector of observations (that was defined in [[Multiple_Linear_Regression_Analysis#Estimating_Regression_Models_Using_Least_Squares|Estimating Regression Models Using Least Squares]]), <math>I\,\!</math> is the identity matrix of order <math>n\,\!</math> and <math>J\,\!</math> represents an <math>n\times n\,\!</math> square matrix of ones. The number of degrees of freedom associated with <math>S{{S}_{T}}\,\!</math>, <math>dof(S{{S}_{T}})\,\!</math>, is <math>n-1\,\!</math>. Knowing <math>S{{S}_{T}}\,\!</math> and <math>dof(S{{S}_{T}})\,\!</math>, the total mean square, <math>M{{S}_{T}}\,\!</math>, can be calculated.<br />
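As a quick illustration with made-up observations, the quadratic form <math>{{y}^{\prime }}\left[ I-(\frac{1}{n})J \right]y\,\!</math> reduces to the familiar sum of squared deviations about the mean; a short Python sketch confirms the two forms agree:<br />
<br />
```python
# Hedged sketch with hypothetical data: the quadratic form
# y'[I - (1/n)J]y equals the sum of squared deviations about the mean.
def total_ss_matrix_form(y):
    n = len(y)
    # y'Iy = sum(y_i^2);  y'(1/n)Jy = (sum(y_i))^2 / n
    return sum(yi * yi for yi in y) - sum(y) ** 2 / n

def total_ss_deviation_form(y):
    ybar = sum(y) / len(y)
    return sum((yi - ybar) ** 2 for yi in y)

y = [41.0, 49.5, 69.2, 46.8, 55.0]  # hypothetical response values
sst = total_ss_matrix_form(y)
```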
<br />
The regression mean square, <math>M{{S}_{R}}\,\!</math>, is obtained by dividing the regression sum of squares, <math>S{{S}_{R}}\,\!</math>, by the respective degrees of freedom, <math>dof(S{{S}_{R}})\,\!</math>, as follows:<br />
<br />
<br />
::<math>M{{S}_{R}}=\frac{S{{S}_{R}}}{dof(S{{S}_{R}})}\,\!</math><br />
<br />
<br />
The regression sum of squares, <math>S{{S}_{R}}\,\!</math>, is calculated using the following equation:<br />
<br />
<br />
::<math>S{{S}_{R}}={{y}^{\prime }}\left[ H-(\frac{1}{n})J \right]y\,\!</math><br />
<br />
<br />
where <math>n\,\!</math> is the total number of observations, <math>y\,\!</math> is the vector of observations, <math>H\,\!</math> is the hat matrix and <math>J\,\!</math> represents an <math>n\times n\,\!</math> square matrix of ones. The number of degrees of freedom associated with <math>S{{S}_{R}}\,\!</math>, <math>dof(S{{S}_{R}})\,\!</math>, is <math>k\,\!</math>, where <math>k\,\!</math> is the number of predictor variables in the model. Knowing <math>S{{S}_{R}}\,\!</math> and <math>dof(S{{S}_{R}})\,\!</math>, the regression mean square, <math>M{{S}_{R}}\,\!</math>, can be calculated.<br />
<br />
The error mean square, <math>M{{S}_{E}}\,\!</math>, is obtained by dividing the error sum of squares, <math>S{{S}_{E}}\,\!</math>, by the respective degrees of freedom, <math>dof(S{{S}_{E}})\,\!</math>, as follows:<br />
<br />
<br />
::<math>M{{S}_{E}}=\frac{S{{S}_{E}}}{dof(S{{S}_{E}})}\,\!</math><br />
<br />
<br />
The error sum of squares, <math>S{{S}_{E}}\,\!</math>, is calculated using the following equation:<br />
<br />
<br />
::<math>S{{S}_{E}}={{y}^{\prime }}(I-H)y\,\!</math><br />
<br />
<br />
where <math>y\,\!</math> is the vector of observations, <math>I\,\!</math> is the identity matrix of order <math>n\,\!</math> and <math>H\,\!</math> is the hat matrix. The number of degrees of freedom associated with <math>S{{S}_{E}}\,\!</math>, <math>dof(S{{S}_{E}})\,\!</math>, is <math>n-(k+1)\,\!</math>, where <math>n\,\!</math> is the total number of observations and <math>k\,\!</math> is the number of predictor variables in the model. Knowing <math>S{{S}_{E}}\,\!</math> and <math>dof(S{{S}_{E}})\,\!</math>, the error mean square, <math>M{{S}_{E}}\,\!</math>, can be calculated. The error mean square is an estimate of the variance, <math>{{\sigma }^{2}}\,\!</math>, of the random error terms, <math>{{\epsilon }_{i}}\,\!</math>. <br />
<br />
<br />
::<math>{{\hat{\sigma }}^{2}}=M{{S}_{E}}\,\!</math><br />
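The bookkeeping above can be sketched with hypothetical values: the sums of squares satisfy <math>S{{S}_{T}}=S{{S}_{R}}+S{{S}_{E}}\,\!</math> and the degrees of freedom partition as <math>n-1=k+(n-(k+1))\,\!</math>.<br />
<br />
```python
# Hypothetical ANOVA bookkeeping: SST = SSR + SSE, and the degrees of
# freedom partition as (n - 1) = k + (n - (k + 1)).
n, k = 20, 3                      # hypothetical sample size and predictors
ss_r, ss_e = 450.0, 150.0         # hypothetical regression and error SS
ss_t = ss_r + ss_e                # total sum of squares

dof_t, dof_r, dof_e = n - 1, k, n - (k + 1)

ms_t = ss_t / dof_t               # total mean square
ms_r = ss_r / dof_r               # regression mean square
ms_e = ss_e / dof_e               # error mean square = estimate of sigma^2
```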
<br />
=====Example=====<br />
The test for the significance of regression, for the regression model obtained for the data in the table (see [[Multiple_Linear_Regression_Analysis#Estimating_Regression_Models_Using_Least_Squares| Estimating Regression Models Using Least Squares]]), is illustrated in this example. The null hypothesis for the model is:<br />
<br />
<br />
::<math>{{H}_{0}}: {{\beta }_{1}}={{\beta }_{2}}=0\,\!</math><br />
<br />
<br />
The statistic to test <math>{{H}_{0}}\,\!</math> is:<br />
<br />
<br />
::<math>{{F}_{0}}=\frac{M{{S}_{R}}}{M{{S}_{E}}}\,\!</math><br />
<br />
<br />
To calculate <math>{{F}_{0}}\,\!</math>, the sums of squares are first calculated so that the mean squares can be obtained. Then the mean squares are used to calculate the statistic <math>{{F}_{0}}\,\!</math> to carry out the significance test.<br />
<br />
The regression sum of squares, <math>S{{S}_{R}}\,\!</math>, can be obtained as:<br />
<br />
<br />
::<math>S{{S}_{R}}={{y}^{\prime }}\left[ H-(\frac{1}{n})J \right]y\,\!</math><br />
<br />
<br />
The hat matrix, <math>H\,\!</math>, is calculated as follows using the design matrix <math>X\,\!</math> from the previous [[Multiple_Linear_Regression_Analysis#Example| example]]:<br />
<br />
<br />
::<math>\begin{align}<br />
H & = & X{{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }} \\ <br />
& = & \left[ \begin{matrix}<br />
0.27552 & 0.25154 & . & . & -0.04030 \\<br />
0.25154 & 0.23021 & . & . & -0.02920 \\<br />
. & . & . & . & . \\<br />
. & . & . & . & . \\<br />
-0.04030 & -0.02920 & . & . & 0.30115 \\<br />
\end{matrix} \right] <br />
\end{align}\,\!</math><br />
<br />
<br />
Knowing <math>y\,\!</math>, <math>H\,\!</math> and <math>J\,\!</math>, the regression sum of squares, <math>S{{S}_{R}}\,\!</math>, can be calculated:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{R}} & = & {{y}^{\prime }}\left[ H-(\frac{1}{n})J \right]y \\ <br />
& = & 12816.35 <br />
\end{align}\,\!</math><br />
<br />
<br />
The degrees of freedom associated with <math>S{{S}_{R}}\,\!</math> is <math>k\,\!</math>, which equals two since there are two predictor variables in the data in the table (see [[Multiple_Linear_Regression_Analysis#Estimating_Regression_Models_Using_Least_Squares| Multiple Linear Regression Analysis]]). Therefore, the regression mean square is:<br />
<br />
<br />
::<math>\begin{align}<br />
M{{S}_{R}}& = & \frac{S{{S}_{R}}}{dof(S{{S}_{R}})} \\ <br />
& = & \frac{12816.35}{2} \\ <br />
& = & 6408.17 <br />
\end{align}\,\!</math><br />
<br />
<br />
Similarly, to calculate the error mean square, <math>M{{S}_{E}}\,\!</math>, the error sum of squares, <math>S{{S}_{E}}\,\!</math>, can be obtained as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{E}} &= & {{y}^{\prime }}\left[ I-H \right]y \\ <br />
& = & 423.37 <br />
\end{align}\,\!</math><br />
<br />
<br />
The degrees of freedom associated with <math>S{{S}_{E}}\,\!</math> is <math>n-(k+1)\,\!</math>. Therefore, the error mean square, <math>M{{S}_{E}}\,\!</math>, is:<br />
<br />
<br />
::<math>\begin{align}<br />
M{{S}_{E}} &= & \frac{S{{S}_{E}}}{dof(S{{S}_{E}})} \\ <br />
& = & \frac{S{{S}_{E}}}{(n-(k+1))} \\ <br />
& = & \frac{423.37}{(17-(2+1))} \\ <br />
& = & 30.24 <br />
\end{align}\,\!</math><br />
<br />
<br />
The statistic to test the significance of regression can now be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
{{f}_{0}}& = & \frac{M{{S}_{R}}}{M{{S}_{E}}} \\ <br />
& = & \frac{6408.17}{30.24} \\ <br />
& = & 211.9 <br />
\end{align}\,\!</math><br />
<br />
<br />
The critical value for this test, corresponding to a significance level of 0.1, is:<br />
<br />
<br />
::<math>\begin{align}<br />
{{f}_{\alpha ,k,n-(k+1)}} &= & {{f}_{0.1,2,14}} \\ <br />
& = & 2.726 <br />
\end{align}\,\!</math><br />
<br />
<br />
Since <math>{{f}_{0}}>{{f}_{0.1,2,14}}\,\!</math>, <math>{{H}_{0}}:\,\!</math> <math>{{\beta }_{1}}={{\beta }_{2}}=0\,\!</math> is rejected and it is concluded that at least one coefficient out of <math>{{\beta }_{1}}\,\!</math> and <math>{{\beta }_{2}}\,\!</math> is significant. In other words, it is concluded that a regression model exists between yield and either one or both of the factors in the table. The analysis of variance is summarized in the following table.<br />
<br />
<br />
[[Image:doet5.2.png|center|477px|ANOVA table for the significance of regression test.|link=]]<br />
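The arithmetic in this example can be reproduced in a few lines of Python (a sketch using the sums of squares obtained above):<br />
<br />
```python
# Reproduce the example: mean squares and the F statistic from the
# sums of squares above (n = 17 observations, k = 2 predictors).
n, k = 17, 2
ss_r, ss_e = 12816.35, 423.37

ms_r = ss_r / k                   # regression mean square, ~6408.17
ms_e = ss_e / (n - (k + 1))       # error mean square, ~30.24
f0 = ms_r / ms_e                  # ~211.9
reject_h0 = f0 > 2.726            # critical value f_{0.1,2,14}
```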
<br />
===Test on Individual Regression Coefficients (''t'' Test)===<br />
The <math>t\,\!</math> test is used to check the significance of individual regression coefficients in the multiple linear regression model. Adding a significant variable to a regression model makes the model more effective, while adding an unimportant variable may make the model worse. The hypothesis statements to test the significance of a particular regression coefficient, <math>{{\beta }_{j}}\,\!</math>, are:<br />
<br />
<br />
::<math>\begin{align}<br />
& {{H}_{0}}: & {{\beta }_{j}}=0 \\ <br />
& {{H}_{1}}: & {{\beta }_{j}}\ne 0 <br />
\end{align}\,\!</math><br />
<br />
<br />
The test statistic for this test is based on the <math>t\,\!</math> distribution (and is similar to the one used in the case of simple linear regression models in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]]):<br />
<br />
<br />
::<math>{{T}_{0}}=\frac{{{{\hat{\beta }}}_{j}}}{se({{{\hat{\beta }}}_{j}})}\,\!</math><br />
<br />
<br />
where the standard error, <math>se({{\hat{\beta }}_{j}})\,\!</math>, is the positive square root of the corresponding diagonal element of the variance-covariance matrix of the estimated regression coefficients, <math>C={{\hat{\sigma }}^{2}}{{({{X}^{\prime }}X)}^{-1}}\,\!</math>. The analyst would fail to reject the null hypothesis if the test statistic lies in the acceptance region:<br />
<br />
<br />
::<math>-{{t}_{\alpha /2,n-(k+1)}}<{{T}_{0}}<{{t}_{\alpha /2,n-(k+1)}}\,\!</math><br />
<br />
<br />
This test measures the contribution of a variable while the remaining variables are included in the model. For the model <math>\hat{y}={{\hat{\beta }}_{0}}+{{\hat{\beta }}_{1}}{{x}_{1}}+{{\hat{\beta }}_{2}}{{x}_{2}}+{{\hat{\beta }}_{3}}{{x}_{3}}\,\!</math>, if the test is carried out for <math>{{\beta }_{1}}\,\!</math>, then the test will check the significance of including the variable <math>{{x}_{1}}\,\!</math> in the model that contains <math>{{x}_{2}}\,\!</math> and <math>{{x}_{3}}\,\!</math> (i.e., the model <math>\hat{y}={{\hat{\beta }}_{0}}+{{\hat{\beta }}_{2}}{{x}_{2}}+{{\hat{\beta }}_{3}}{{x}_{3}}\,\!</math> ). Hence the test is also referred to as a partial or marginal test. In DOE folios, this test is displayed in the Regression Information table.<br />
<br />
====Example====<br />
The test to check the significance of the estimated regression coefficients for the data is illustrated in this example. The null hypothesis to test the coefficient <math>{{\beta }_{2}}\,\!</math> is:<br />
<br />
<br />
::<math>{{H}_{0}}:{{\beta }_{2}}=0\,\!</math><br />
<br />
<br />
The null hypothesis to test <math>{{\beta }_{1}}\,\!</math> can be obtained in a similar manner. To calculate the test statistic, <math>{{T}_{0}}\,\!</math>, we need to calculate the standard error.<br />
In the [[Multiple_Linear_Regression_Analysis#Example_2|example]], the value of the error mean square, <math>M{{S}_{E}}\,\!</math>, was obtained as 30.24. The error mean square is an estimate of the variance, <math>{{\sigma }^{2}}\,\!</math>. <br />
<br />
<br />
:Therefore: <br />
<br />
<br />
::<math>\begin{align}<br />
{{{\hat{\sigma }}}^{2}} &= & M{{S}_{E}} \\ <br />
& = & 30.24 <br />
\end{align}\,\!</math><br />
<br />
<br />
The variance-covariance matrix of the estimated regression coefficients is:<br />
<br />
<br />
::<math>\begin{align}<br />
C &= & {{{\hat{\sigma }}}^{2}}{{({{X}^{\prime }}X)}^{-1}} \\ <br />
& = & 30.24\left[ \begin{matrix}<br />
336.5 & 1.2 & -13.1 \\<br />
1.2 & 0.005 & -0.049 \\<br />
-13.1 & -0.049 & 0.5 \\<br />
\end{matrix} \right] \\ <br />
& = & \left[ \begin{matrix}<br />
10176.75 & 37.145 & -395.83 \\<br />
37.145 & 0.1557 & -1.481 \\<br />
-395.83 & -1.481 & 15.463 \\<br />
\end{matrix} \right] <br />
\end{align}\,\!</math><br />
<br />
<br />
From the diagonal elements of <math>C\,\!</math>, the estimated standard errors of <math>{{\hat{\beta }}_{1}}\,\!</math> and <math>{{\hat{\beta }}_{2}}\,\!</math> are:<br />
<br />
<br />
::<math>\begin{align}<br />
se({{{\hat{\beta }}}_{1}}) &= & \sqrt{0.1557}=0.3946 \\ <br />
se({{{\hat{\beta }}}_{2}})& = & \sqrt{15.463}=3.93 <br />
\end{align}\,\!</math><br />
<br />
<br />
The corresponding test statistics for these coefficients are:<br />
<br />
<br />
::<math>\begin{align}<br />
{{({{t}_{0}})}_{{{{\hat{\beta }}}_{1}}}} &= & \frac{{{{\hat{\beta }}}_{1}}}{se({{{\hat{\beta }}}_{1}})}=\frac{1.24}{0.3946}=3.1393 \\ <br />
{{({{t}_{0}})}_{{{{\hat{\beta }}}_{2}}}} &= & \frac{{{{\hat{\beta }}}_{2}}}{se({{{\hat{\beta }}}_{2}})}=\frac{12.08}{3.93}=3.0726 <br />
\end{align}\,\!</math><br />
<br />
<br />
The critical values for the present <math>t\,\!</math> test at a significance of 0.1 are:<br />
<br />
<br />
::<math>\begin{align}<br />
{{t}_{\alpha /2,n-(k+1)}} &= & {{t}_{0.05,14}}=1.761 \\ <br />
-{{t}_{\alpha /2,n-(k+1)}} & = & -{{t}_{0.05,14}}=-1.761 <br />
\end{align}\,\!</math><br />
<br />
<br />
Considering <math>{{\hat{\beta }}_{2}}\,\!</math>, it can be seen that <math>{{({{t}_{0}})}_{{{{\hat{\beta }}}_{2}}}}\,\!</math> does not lie in the acceptance region of <math>-{{t}_{0.05,14}}<{{t}_{0}}<{{t}_{0.05,14}}\,\!</math>. The null hypothesis, <math>{{H}_{0}}:{{\beta }_{2}}=0\,\!</math>, is rejected and it is concluded that <math>{{\beta }_{2}}\,\!</math> is significant at <math>\alpha =0.1\,\!</math>. This conclusion can also be arrived at using the <math>p\,\!</math> value noting that the hypothesis is two-sided. The <math>p\,\!</math> value corresponding to the test statistic, <math>{{({{t}_{0}})}_{{{{\hat{\beta }}}_{2}}}} = 3.0726\,\!</math>, based on the <math>t\,\!</math> distribution with 14 degrees of freedom is:<br />
<br />
<br />
::<math>\begin{align}<br />
p\text{ }value & = & 2\times (1-P(T\le |{{t}_{0}}|)) \\ <br />
& = & 2\times (1-0.9959) \\ <br />
& = & 0.0083 <br />
\end{align}\,\!</math><br />
<br />
<br />
Since the <math>p\,\!</math> value is less than the significance, <math>\alpha =0.1\,\!</math>, it is concluded that <math>{{\beta }_{2}}\,\!</math> is significant. The hypothesis test on <math>{{\beta }_{1}}\,\!</math> can be carried out in a similar manner.<br />
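The <math>t\,\!</math> statistics in this example can be reproduced from the diagonal elements of <math>C\,\!</math>; the sketch below uses the rounded matrix entries reported above, so the results differ slightly from the exact values:<br />
<br />
```python
# Sketch: standard errors are square roots of the diagonal elements of
# C = sigma2_hat * (X'X)^-1, and each T0 is a coefficient over its
# standard error. Values are the rounded ones reported above.
import math

c_diag = {"b1": 0.1557, "b2": 15.463}     # diagonal elements of C
coeffs = {"b1": 1.24, "b2": 12.08}        # estimated coefficients

t0 = {}
for name in coeffs:
    se = math.sqrt(c_diag[name])          # se(beta_hat_j)
    t0[name] = coeffs[name] / se

t_critical = 1.761                        # t_{0.05, 14}
significant = {name: abs(t) > t_critical for name, t in t0.items()}
```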
<br />
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]], in DOE folios, the information related to the <math>t\,\!</math> test is displayed in the Regression Information table as shown in the figure below. <br />
<br />
<br />
[[Image:doe5_13.png|center|884px|Regression results for the data.|link=]]<br />
<br />
<br />
In this table, the <math>t\,\!</math> test for <math>{{\beta }_{2}}\,\!</math> is displayed in the row for the term Factor 2 because <math>{{\beta }_{2}}\,\!</math> is the coefficient that represents this factor in the regression model. Columns labeled Standard Error, T Value and P Value represent the standard error, the test statistic for the <math>t\,\!</math> test and the <math>p\,\!</math> value for the <math>t\,\!</math> test, respectively. These values have been calculated for <math>{{\beta }_{2}}\,\!</math> in this example. The Coefficient column represents the estimate of regression coefficients. These values are calculated as shown in [[Multiple_Linear_Regression_Analysis#Example|this]] example. The Effect column represents values obtained by multiplying the coefficients by a factor of 2. This value is useful in the case of two factor experiments and is explained in [[Two_Level_Factorial_Experiments| Two-Level Factorial Experiments]]. Columns labeled Low Confidence and High Confidence represent the limits of the confidence intervals for the regression coefficients and are explained in [[Multiple_Linear_Regression_Analysis#Confidence_Intervals_in_Multiple_Linear_Regression|Confidence Intervals in Multiple Linear Regression]]. The Variance Inflation Factor column displays values that give a measure of ''multicollinearity''. This is explained in [[Multiple_Linear_Regression_Analysis#Multicollinearity|Multicollinearity]].<br />
<br />
===Test on Subsets of Regression Coefficients (Partial ''F'' Test)===<br />
<br />
This test can be considered to be the general form of the <math>t\,\!</math> test mentioned in the previous section. This is because the test simultaneously checks the significance of including one or more regression coefficients in the multiple linear regression model. Adding a variable to a model increases the regression sum of squares, <math>S{{S}_{R}}\,\!</math>; this increase is called the ''extra sum of squares'', and the test is based on it. <br />
Assume that the vector of the regression coefficients, <math>\beta\,\!</math>, for the multiple linear regression model, <math>y=X\beta +\epsilon\,\!</math>, is partitioned into two vectors with the second vector, <math>{{\theta}_{2}}\,\!</math>, containing the last <math>r\,\!</math> regression coefficients, and the first vector, <math>{{\theta}_{1}}\,\!</math>, containing the first ( <math>k+1-r\,\!</math> ) coefficients as follows:<br />
<br />
<br />
::<math>\beta =\left[ \begin{matrix}<br />
{{\theta}_{1}} \\<br />
{{\theta}_{2}} \\<br />
\end{matrix} \right]\,\!</math><br />
<br />
<br />
:with:<br />
<br />
<br />
::<math>{{\theta}_{1}}=[{{\beta }_{0}},{{\beta }_{1}}...{{\beta }_{k-r}}{]}'\text{ and }{{\theta}_{2}}=[{{\beta }_{k-r+1}},{{\beta }_{k-r+2}}...{{\beta }_{k}}{]}'\text{ }\,\!</math><br />
<br />
<br />
The hypothesis statements to test the significance of adding the regression coefficients in <math>{{\theta}_{2}}\,\!</math> to a model containing the regression coefficients in <math>{{\theta}_{1}}\,\!</math> may be written as:<br />
<br />
<br />
::<math>\begin{align}<br />
& {{H}_{0}}: & {{\theta}_{2}}=0 \\ <br />
& {{H}_{1}}: & {{\theta}_{2}}\ne 0 <br />
\end{align}\,\!</math><br />
<br />
<br />
The test statistic for this test follows the <math>F\,\!</math> distribution and can be calculated as follows:<br />
<br />
<br />
::<math>{{F}_{0}}=\frac{S{{S}_{R}}({{\theta}_{2}}|{{\theta}_{1}})/r}{M{{S}_{E}}}\,\!</math><br />
<br />
<br />
where <math>S{{S}_{R}}({{\theta}_{2}}|{{\theta}_{1}})\,\!</math> is the increase in the regression sum of squares when the variables corresponding to the coefficients in <math>{{\theta}_{2}}\,\!</math> are added to a model already containing <math>{{\theta}_{1}}\,\!</math>, and <math>M{{S}_{E}}\,\!</math> is obtained from the equation given in [[Simple_Linear_Regression_Analysis#Mean_Squares|Simple Linear Regression Analysis]]. The value of the extra sum of squares is obtained as explained in the next section.<br />
<br />
The null hypothesis, <math>{{H}_{0}}\,\!</math>, is rejected if <math>{{F}_{0}}>{{f}_{\alpha ,r,n-(k+1)}}\,\!</math>. Rejection of <math>{{H}_{0}}\,\!</math> leads to the conclusion that at least one of the variables in <math>{{x}_{k-r+1}}\,\!</math>, <math>{{x}_{k-r+2}}\,\!</math>... <math>{{x}_{k}}\,\!</math> contributes significantly to the regression model. In a DOE folio, the results from the partial <math>F\,\!</math> test are displayed in the ANOVA table.<br />
<br />
[[Image:doe5_14.png|center|650px|ANOVA Table for Extra Sum of Squares in Weibull++.]]<br />
<br />
===Types of Extra Sum of Squares===<br />
The extra sum of squares can be calculated using either the partial (or adjusted) sum of squares or the sequential sum of squares. The type of extra sum of squares used affects the calculation of the test statistic for the partial <math>F\,\!</math> test described above. In DOE folios, selection for the type of extra sum of squares is available as shown in the figure below. The partial sum of squares is used as the default setting. The reason for this is explained in the following section on the partial sum of squares. <br />
<br />
<br />
====Partial Sum of Squares====<br />
The partial sum of squares for a term is the extra sum of squares when all terms, except the term under consideration, are included in the model. For example, consider the model:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+\epsilon\,\!</math><br />
<br />
<br />
The sum of squares of regression of this model is denoted by <math>S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}},{{\beta }_{12}})\,\!</math>. Assume that we need to know the partial sum of squares for <math>{{\beta }_{2}}\,\!</math>. The partial sum of squares for <math>{{\beta }_{2}}\,\!</math> is the increase in the regression sum of squares when <math>{{\beta }_{2}}\,\!</math> is added to the model. This increase is the difference in the regression sum of squares for the full model of the equation given above and the model that includes all terms except <math>{{\beta }_{2}}\,\!</math>. These terms are <math>{{\beta }_{0}}\,\!</math>, <math>{{\beta }_{1}}\,\!</math> and <math>{{\beta }_{12}}\,\!</math>. The model that contains these terms is:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+\epsilon\,\!</math><br />
<br />
<br />
The sum of squares of regression of this model is denoted by <math>S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{12}})\,\!</math>. The partial sum of squares for <math>{{\beta }_{2}}\,\!</math> can be represented as <math>S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{0}},{{\beta }_{1}},{{\beta }_{12}})\,\!</math> and is calculated as follows:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{0}},{{\beta }_{1}},{{\beta }_{12}})& = & S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}},{{\beta }_{12}})-S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{12}}) <br />
\end{align}\,\!</math><br />
<br />
<br />
For the present case, <math>{{\theta}_{2}}=[{{\beta }_{2}}{]}'\,\!</math> and <math>{{\theta}_{1}}=[{{\beta }_{0}},{{\beta }_{1}},{{\beta }_{12}}{]}'\,\!</math>. It can be noted that for the partial sum of squares, <math>{{\theta}_{1}}\,\!</math> contains all coefficients other than the coefficient being tested.<br />
<br />
A Weibull++ DOE folio has the partial sum of squares as the default selection. This is because the <math>t\,\!</math> test is a partial test, i.e., the <math>t\,\!</math> test on an individual coefficient is carried out by assuming that all the remaining coefficients are included in the model (similar to the way the partial sum of squares is calculated). The results from the <math>t\,\!</math> test are displayed in the Regression Information table. The results from the partial <math>F\,\!</math> test are displayed in the ANOVA table. To keep the results in the two tables consistent with each other, the partial sum of squares is used as the default selection for the results displayed in the ANOVA table.<br />
The partial sum of squares for all terms of a model may not add up to the regression sum of squares for the full model when the regression coefficients are correlated. If it is preferred that the extra sum of squares for all terms in the model always add up to the regression sum of squares for the full model then the sequential sum of squares should be used.<br />
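The point that partial sums of squares need not add up to the full-model regression sum of squares can be seen with hypothetical values:<br />
<br />
```python
# Hedged sketch with hypothetical SSR values: partial sums of squares
# need not add up to the full-model regression sum of squares when
# the coefficients are correlated.
ss_r_full = 1200.0          # SSR(b0, b1, b2)   (hypothetical)
ss_r_without_b1 = 700.0     # SSR(b0, b2)       (hypothetical)
ss_r_without_b2 = 900.0     # SSR(b0, b1)       (hypothetical)

partial_b1 = ss_r_full - ss_r_without_b1   # SSR(b1 | b0, b2) = 500
partial_b2 = ss_r_full - ss_r_without_b2   # SSR(b2 | b0, b1) = 300

# 500 + 300 = 800 != 1200: the partial sums of squares here do not
# recover the full-model SSR.
```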
<br />
=====Example=====<br />
This example illustrates the <math>F\,\!</math> test using the partial sum of squares. The test is conducted for the coefficient <math>{{\beta }_{1}}\,\!</math> corresponding to the predictor variable <math>{{x}_{1}}\,\!</math> for the data. The regression model used for this data set in the [[Multiple_Linear_Regression_Analysis#Example| example]] is:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\epsilon\,\!</math><br />
<br />
<br />
The null hypothesis to test the significance of <math>{{\beta }_{1}}\,\!</math> is:<br />
<br />
<br />
::<math>{{H}_{0}}: {{\beta }_{1}}=0\,\!</math><br />
<br />
<br />
The statistic to test this hypothesis is:<br />
<br />
<br />
::<math>{{F}_{0}}=\frac{S{{S}_{R}}({{\beta }_{1}}|{{\beta }_{2}})/r}{M{{S}_{E}}}\,\!</math><br />
<br />
<br />
where <math>S{{S}_{R}}({{\beta }_{1}}|{{\beta }_{2}})\,\!</math> represents the partial sum of squares for <math>{{\beta }_{1}}\,\!</math>, <math>r\,\!</math> represents the number of degrees of freedom for <math>S{{S}_{R}}({{\beta }_{1}}|{{\beta }_{2}})\,\!</math> (which is one because there is just one coefficient, <math>{{\beta }_{1}}\,\!</math>, being tested) and <math>M{{S}_{E}}\,\!</math> is the error mean square and has been calculated in the second [[Multiple_Linear_Regression_Analysis#Example_2|example]] as 30.24. <br />
<br />
The partial sum of squares for <math>{{\beta }_{1}}\,\!</math> is the difference between the regression sum of squares for the full model, <math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\epsilon\,\!</math>, and the regression sum of squares for the model excluding <math>{{\beta }_{1}}\,\!</math>, <math>Y={{\beta }_{0}}+{{\beta }_{2}}{{x}_{2}}+\epsilon\,\!</math>. The regression sum of squares for the full model has been calculated in the second [[Multiple_Linear_Regression_Analysis#Example_2|example]] as 12816.35. Therefore:<br />
<br />
<br />
::<math>S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}})=12816.35\,\!</math><br />
<br />
<br />
The regression sum of squares for the model <math>Y={{\beta }_{0}}+{{\beta }_{2}}{{x}_{2}}+\epsilon\,\!</math> is obtained as shown next. First the design matrix for this model, <math>{{X}_{{{\beta }_{0}},{{\beta }_{2}}}}\,\!</math>, is obtained by dropping the second column in the design matrix of the full model, <math>X\,\!</math> (the full design matrix, <math>X\,\!</math>, was obtained in the [[Multiple_Linear_Regression_Analysis#Example| example]]). The second column of <math>X\,\!</math> corresponds to the coefficient <math>{{\beta }_{1}}\,\!</math> which is no longer in the model. Therefore, the design matrix for the model, <math>Y={{\beta }_{0}}+{{\beta }_{2}}{{x}_{2}}+\epsilon\,\!</math>, is:<br />
<br />
<br />
::<math>{{X}_{{{\beta }_{0}},{{\beta }_{2}}}}=\left[ \begin{matrix}<br />
1 & 29.1 \\<br />
1 & 29.3 \\<br />
. & . \\<br />
. & . \\<br />
1 & 32.9 \\<br />
\end{matrix} \right]\,\!</math><br />
<br />
<br />
The hat matrix corresponding to this design matrix is <math>{{H}_{{{\beta }_{0}},{{\beta }_{2}}}}\,\!</math>. It can be calculated using <math>{{H}_{{{\beta }_{0}},{{\beta }_{2}}}}={{X}_{{{\beta }_{0}},{{\beta }_{2}}}}{{(X_{{{\beta }_{0}},{{\beta }_{2}}}^{\prime }{{X}_{{{\beta }_{0}},{{\beta }_{2}}}})}^{-1}}X_{{{\beta }_{0}},{{\beta }_{2}}}^{\prime }\,\!</math>. Once <math>{{H}_{{{\beta }_{0}},{{\beta }_{2}}}}\,\!</math> is known, the regression sum of squares for the model <math>Y={{\beta }_{0}}+{{\beta }_{2}}{{x}_{2}}+\epsilon\,\!</math>, can be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{R}}({{\beta }_{0}},{{\beta }_{2}}) & = & {{y}^{\prime }}\left[ {{H}_{{{\beta }_{0}},{{\beta }_{2}}}}-(\frac{1}{n})J \right]y \\ <br />
& = & 12518.32 <br />
\end{align}\,\!</math><br />
<br />
<br />
Therefore, the partial sum of squares for <math>{{\beta }_{1}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{R}}({{\beta }_{1}}|{{\beta }_{2}})& = & S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}})-S{{S}_{R}}({{\beta }_{0}},{{\beta }_{2}}) \\ <br />
& = & 12816.35-12518.32 \\ <br />
& = & 298.03 <br />
\end{align}\,\!</math><br />
<br />
<br />
Knowing the partial sum of squares, the statistic to test the significance of <math>{{\beta }_{1}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
{{f}_{0}} &= & \frac{S{{S}_{R}}({{\beta }_{1}}|{{\beta }_{2}})/r}{M{{S}_{E}}} \\ <br />
& = & \frac{298.03/1}{30.24} \\ <br />
& = & 9.855 <br />
\end{align}\,\!</math><br />
<br />
<br />
The <math>p\,\!</math> value corresponding to this statistic based on the <math>F\,\!</math> distribution with 1 degree of freedom in the numerator and 14 degrees of freedom in the denominator is: <br />
<br />
::<math>\begin{align}<br />
p\text{ }value &= & 1-P(F\le {{f}_{0}}) \\ <br />
& = & 1-0.9928 \\ <br />
& = & 0.0072 <br />
\end{align}\,\!</math><br />
<br />
<br />
Assuming that the desired significance is 0.1, since <math>p\,\!</math> value < 0.1, <math>{{H}_{0}}:{{\beta }_{1}}=0\,\!</math> is rejected and it can be concluded that <math>{{\beta }_{1}}\,\!</math> is significant. The test for <math>{{\beta }_{2}}\,\!</math> can be carried out in a similar manner. In the results obtained from the DOE folio, the calculations for this test are displayed in the ANOVA table as shown in the following figure. Note that the conclusion obtained in this example can also be obtained using the <math>t\,\!</math> test as explained in the [[Multiple_Linear_Regression_Analysis#Example_3|example]] in [[Multiple_Linear_Regression_Analysis#Test_on_Individual_Regression_Coefficients_.28t__Test.29|Test on Individual Regression Coefficients (t Test)]]. The ANOVA and Regression Information tables in the DOE folio represent two different ways to test for the significance of the variables included in the multiple linear regression model.<br />
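The arithmetic of this example can be checked in a couple of lines (a sketch using the values obtained above):<br />
<br />
```python
# Reproduce the partial F statistic above: the partial sum of squares
# for beta_1 divided by its degrees of freedom and the error mean square.
ss_r_full = 12816.35            # SSR(b0, b1, b2)
ss_r_reduced = 12518.32         # SSR(b0, b2)
partial_ss = ss_r_full - ss_r_reduced   # SSR(b1 | b2) = 298.03

r, ms_e = 1, 30.24              # one coefficient tested; MSE from earlier
f0 = (partial_ss / r) / ms_e    # ~9.855
```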
<br />
====Sequential Sum of Squares====<br />
The sequential sum of squares for a coefficient is the extra sum of squares when coefficients are added to the model in a sequence. For example, consider the model:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}+{{\beta }_{13}}{{x}_{1}}{{x}_{3}}+{{\beta }_{23}}{{x}_{2}}{{x}_{3}}+{{\beta }_{123}}{{x}_{1}}{{x}_{2}}{{x}_{3}}+\epsilon\,\!</math><br />
<br />
<br />
The sequential sum of squares for <math>{{\beta }_{13}}\,\!</math> is the increase in the sum of squares when <math>{{\beta }_{13}}\,\!</math> is added to the model observing the sequence of the equation given above. Therefore this extra sum of squares can be obtained by taking the difference between the regression sum of squares for the model after <math>{{\beta }_{13}}\,\!</math> was added and the regression sum of squares for the model before <math>{{\beta }_{13}}\,\!</math> was added to the model. The model after <math>{{\beta }_{13}}\,\!</math> is added is as follows:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}+{{\beta }_{13}}{{x}_{1}}{{x}_{3}}+\epsilon\,\!</math><br />
<br />
<br />
This is because to maintain the sequence all coefficients preceding <math>{{\beta }_{13}}\,\!</math> must be included in the model. These are the coefficients <math>{{\beta }_{0}}\,\!</math>, <math>{{\beta }_{1}}\,\!</math>, <math>{{\beta }_{2}}\,\!</math>, <math>{{\beta }_{12}}\,\!</math> and <math>{{\beta }_{3}}\,\!</math>.<br />
Similarly, the model before <math>{{\beta }_{13}}\,\!</math> is added must contain all coefficients of the equation given above except <math>{{\beta }_{13}}\,\!</math>. This model can be obtained as follows:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}+\epsilon\,\!</math><br />
<br />
<br />
The sequential sum of squares for <math>{{\beta }_{13}}\,\!</math> can be calculated as follows:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{R}}({{\beta }_{13}}|{{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}},{{\beta }_{12}},{{\beta }_{3}}) & = & S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}},{{\beta }_{12}},{{\beta }_{3}},{{\beta }_{13}})- S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}},{{\beta }_{12}},{{\beta }_{3}}) <br />
\end{align}\,\!</math><br />
<br />
<br />
For the present case, <math>{{\theta}_{2}}=[{{\beta }_{13}}{]}'\,\!</math> and <math>{{\theta}_{1}}=[{{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}},{{\beta }_{12}},{{\beta }_{3}}{]}'\,\!</math>. It can be noted that for the sequential sum of squares, <math>{{\theta}_{1}}\,\!</math> contains all coefficients preceding the coefficient being tested.<br />
<br />
The sequential sums of squares for all terms add up to the regression sum of squares for the full model, but they are order dependent: changing the order in which the terms enter the model changes the individual sequential sums of squares.<br />
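For the first predictor entered after the intercept, the sequential sum of squares equals the regression sum of squares of the simple linear regression of <math>y\,\!</math> on that predictor, since the regression sum of squares of the intercept-only model is zero. A sketch with made-up data:<br />
<br />
```python
# Hedged sketch with hypothetical data: sequential sum of squares for
# the first predictor, SSR(beta_1 | beta_0), via the closed-form
# simple-regression result SSR = Sxy^2 / Sxx.
def sequential_ss_first_predictor(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    s_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    s_xx = sum((xi - xbar) ** 2 for xi in x)
    return s_xy ** 2 / s_xx       # SSR(beta_1 | beta_0)

x = [1.0, 2.0, 3.0, 4.0]          # hypothetical predictor values
y = [2.1, 3.9, 6.2, 7.8]          # hypothetical responses
ss_seq = sequential_ss_first_predictor(x, y)
```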
<br />
=====Example=====<br />
This example illustrates the partial <math>F\,\!</math> test using the sequential sum of squares. The test is conducted for the coefficient <math>{{\beta }_{1}}\,\!</math> corresponding to the predictor variable <math>{{x}_{1}}\,\!</math> for the data. The regression model used for this data set in the [[Multiple_Linear_Regression_Analysis#Example|example]] is:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\epsilon \,\!</math><br />
<br />
<br />
The null hypothesis to test the significance of <math>{{\beta }_{1}}\,\!</math> is:<br />
<br />
<br />
::<math>{{H}_{0}}:{{\beta }_{1}}=0\,\!</math><br />
<br />
<br />
The statistic to test this hypothesis is:<br />
<br />
<br />
::<math>{{F}_{0}}=\frac{S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}})/r}{M{{S}_{E}}}\,\!</math><br />
<br />
<br />
where <math>S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}})\,\!</math> represents the sequential sum of squares for <math>{{\beta }_{1}}\,\!</math>, <math>r\,\!</math> represents the number of degrees of freedom for <math>S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}})\,\!</math> (which is one, because just one coefficient, <math>{{\beta }_{1}}\,\!</math>, is being tested) and <math>M{{S}_{E}}\,\!</math> is the error mean square, calculated in the second [[Multiple_Linear_Regression_Analysis#Example_2|example]] as 30.24. <br />
<br />
The sequential sum of squares for <math>{{\beta }_{1}}\,\!</math> is the difference between the regression sum of squares for the model after adding <math>{{\beta }_{1}}\,\!</math>, <math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+\epsilon\,\!</math>, and the regression sum of squares for the model before adding <math>{{\beta }_{1}}\,\!</math>, <math>Y={{\beta }_{0}}+\epsilon\,\!</math>.<br />
The regression sum of squares for the model <math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+\epsilon\,\!</math> is obtained as shown next. First, the design matrix for this model, <math>{{X}_{{{\beta }_{0}},{{\beta }_{1}}}}\,\!</math>, is obtained by dropping the third column in the design matrix for the full model, <math>X\,\!</math> (the full design matrix, <math>X\,\!</math>, was obtained in the [[Multiple_Linear_Regression_Analysis#Example|example]]). The third column of <math>X\,\!</math> corresponds to the coefficient <math>{{\beta }_{2}}\,\!</math>, which is no longer used in the present model. Therefore, the design matrix for the model <math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+\epsilon\,\!</math> is:<br />
<br />
<br />
::<math>{{X}_{{{\beta }_{0}},{{\beta }_{1}}}}=\left[ \begin{matrix}<br />
1 & 41.9 \\<br />
1 & 43.4 \\<br />
. & . \\<br />
. & . \\<br />
1 & 77.8 \\<br />
\end{matrix} \right]\,\!</math><br />
<br />
<br />
The hat matrix corresponding to this design matrix is <math>{{H}_{{{\beta }_{0}},{{\beta }_{1}}}}\,\!</math>. It can be calculated using <math>{{H}_{{{\beta }_{0}},{{\beta }_{1}}}}={{X}_{{{\beta }_{0}},{{\beta }_{1}}}}{{(X_{{{\beta }_{0}},{{\beta }_{1}}}^{\prime }{{X}_{{{\beta }_{0}},{{\beta }_{1}}}})}^{-1}}X_{{{\beta }_{0}},{{\beta }_{1}}}^{\prime }\,\!</math>. Once <math>{{H}_{{{\beta }_{0}},{{\beta }_{1}}}}\,\!</math> is known, the regression sum of squares for the model <math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+\epsilon\,\!</math> can be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}})& = & {{y}^{\prime }}\left[ {{H}_{{{\beta }_{0}},{{\beta }_{1}}}}-(\frac{1}{n})J \right]y \\ <br />
& = & 12530.85 <br />
\end{align}\,\!</math><br />
<br />
<br />
[[Image:doe5_16.png|center|650px|Sequential sum of squares for the data.]] <br />
<br />
<br />
The regression sum of squares for the model <math>Y={{\beta }_{0}}+\epsilon\,\!</math> is equal to zero since this model does not contain any variables. Therefore:<br />
<br />
<br />
::<math>S{{S}_{R}}({{\beta }_{0}})=0\,\!</math><br />
<br />
<br />
The sequential sum of squares for <math>{{\beta }_{1}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{R}}({{\beta }_{1}}|{{\beta }_{0}}) &= & S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}})-S{{S}_{R}}({{\beta }_{0}}) \\ <br />
& = & 12530.85-0 \\ <br />
& = & 12530.85 <br />
\end{align}\,\!</math><br />
<br />
<br />
Knowing the sequential sum of squares, the statistic to test the significance of <math>{{\beta }_{1}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
{{f}_{0}} &= & \frac{S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}})/r}{M{{S}_{E}}} \\ <br />
& = & \frac{12530.85/1}{30.24} \\ <br />
& = & 414.366 <br />
\end{align}\,\!</math><br />
<br />
<br />
The <math>p\,\!</math> value corresponding to this statistic based on the <math>F\,\!</math> distribution with 1 degree of freedom in the numerator and 14 degrees of freedom in the denominator is: <br />
<br />
<br />
::<math>\begin{align}<br />
p\text{ }value &= & 1-P(F\le {{f}_{0}}) \\ <br />
& = & 1-0.99999999999154 \\ <br />
& = & 8.46\times {{10}^{-12}} <br />
\end{align}\,\!</math><br />
<br />
<br />
Assuming that the desired significance is 0.1, since <math>p\,\!</math> value < 0.1, <math>{{H}_{0}}:{{\beta }_{1}}=0\,\!</math> is rejected and it can be concluded that <math>{{\beta }_{1}}\,\!</math> is significant. The test for <math>{{\beta }_{2}}\,\!</math> can be carried out in a similar manner. This result is shown in the following figure.<br />
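The arithmetic of the partial <math>F\,\!</math> test above can be verified directly; note that using the rounded <math>M{{S}_{E}}=30.24\,\!</math> gives approximately 414.4 rather than the 414.366 obtained with the unrounded error mean square:<br />

```python
# Values taken from the text; MSE is rounded to 30.24, so f0 differs
# slightly from the 414.366 shown in the example.
seq_ss_b1 = 12530.85   # sequential sum of squares SS_R(beta1 | beta0)
r = 1                  # one coefficient being tested
mse = 30.24            # error mean square

f0 = (seq_ss_b1 / r) / mse
print(round(f0, 1))
```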
<br />
==Confidence Intervals in Multiple Linear Regression==<br />
<br />
The calculation of confidence intervals for multiple linear regression models is similar to that for simple linear regression models, explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]].<br />
<br />
===Confidence Interval on Regression Coefficients===<br />
<br />
A 100 (<math>1-\alpha\,\!</math>) percent confidence interval on the regression coefficient, <math>{{\beta }_{j}}\,\!</math>, is obtained as follows:<br />
<br />
<br />
::<math>{{\hat{\beta }}_{j}}\pm {{t}_{\alpha /2,n-(k+1)}}\sqrt{{{C}_{jj}}}\,\!</math><br />
<br />
<br />
The confidence intervals on the regression coefficients are displayed in the Regression Information table under the Low Confidence and High Confidence columns, as shown in the following figure.<br />
<br />
<br />
<br />
[[Image:doe5_17.png|center|710px|Confidence interval for the fitted value corresponding to the fifth observation.|link=]]<br />
<br />
<br />
===Confidence Interval on Fitted Values, <math>{{\hat{y}}_{i}}\,\!</math>===<br />
A 100 (<math>1-\alpha\,\!</math>) percent confidence interval on any fitted value, <math>{{\hat{y}}_{i}}\,\!</math>, is given by:<br />
<br />
<br />
::<math>{{\hat{y}}_{i}}\pm {{t}_{\alpha /2,n-(k+1)}}\sqrt{{{{\hat{\sigma }}}^{2}}x_{i}^{\prime }{{({{X}^{\prime }}X)}^{-1}}{{x}_{i}}}\,\!</math><br />
<br />
<br />
where: <br />
<br />
<br />
::<math>{{x}_{i}}=\left[ \begin{matrix}<br />
1 \\<br />
{{x}_{i1}} \\<br />
. \\<br />
. \\<br />
. \\<br />
{{x}_{ik}} \\<br />
\end{matrix} \right]\,\!</math><br />
<br />
<br />
In the above [[Multiple_Linear_Regression_Analysis#Example| example]], the fitted value corresponding to the fifth observation was calculated as <math>{{\hat{y}}_{5}}=266.3\,\!</math>. The 90% confidence interval on this value can be obtained as shown in the figure below. The values of 47.3 and 29.9 used in the figure are the values of the predictor variables corresponding to the fifth observation in the [[Multiple_Linear_Regression_Analysis#Example|table]].<br />
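A minimal sketch of the fitted-value interval follows, using <math>{{t}_{0.05,14}}=1.761\,\!</math> and <math>M{{S}_{E}}=30.24\,\!</math> from the text; the quadratic form <math>x_{5}^{\prime }{{({{X}^{\prime }}X)}^{-1}}{{x}_{5}}\,\!</math> is a hypothetical placeholder, since the full <math>{{({{X}^{\prime }}X)}^{-1}}\,\!</math> matrix is not reproduced here:<br />

```python
import math

y_hat_5 = 266.3  # fitted value for the fifth observation (from the text)
t_crit  = 1.761  # t_{0.05, 14} for a 90% two-sided interval (from the text)
mse     = 30.24  # error mean square, used as sigma-hat squared (from the text)
x_quad  = 0.10   # x_5' (X'X)^{-1} x_5 -- hypothetical placeholder value

# CI: y_hat +/- t * sqrt(sigma^2 * x'(X'X)^{-1}x)
half_width = t_crit * math.sqrt(mse * x_quad)
ci = (y_hat_5 - half_width, y_hat_5 + half_width)
print(ci)
```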
<br />
===Confidence Interval on New Observations===<br />
As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]], the confidence interval on a new observation is also referred to as the prediction interval. The prediction interval takes into account both the error from the fitted model and the error associated with future observations. A 100 (<math>1-\alpha\,\!</math>) percent confidence interval on a new observation, <math>{{\hat{y}}_{p}}\,\!</math>, is obtained as follows:<br />
<br />
<br />
::<math>{{\hat{y}}_{p}}\pm {{t}_{\alpha /2,n-(k+1)}}\sqrt{{{{\hat{\sigma }}}^{2}}(1+x_{p}^{\prime }{{({{X}^{\prime }}X)}^{-1}}{{x}_{p}})}\,\!</math><br />
<br />
<br />
where: <br />
<br />
<br />
::<math>{{x}_{p}}=\left[ \begin{matrix}<br />
1 \\<br />
{{x}_{p1}} \\<br />
. \\<br />
. \\<br />
. \\<br />
{{x}_{pk}} \\<br />
\end{matrix} \right]\,\!</math><br />
<br />
<br />
<math>{{x}_{p1}}\,\!</math>,..., <math>{{x}_{pk}}\,\!</math> are the levels of the predictor variables at which the new observation, <math>{{\hat{y}}_{p}}\,\!</math>, needs to be obtained.<br />
<br />
<br />
In multiple linear regression, prediction intervals should only be obtained at the levels of the predictor variables where the regression model applies. In the case of multiple linear regression, it is easy to miss this. Having values lying within the range of the predictor variables does not necessarily mean that the new observation lies in the region to which the model is applicable. For example, consider the next figure, where the shaded area shows the region to which a two-variable regression model is applicable. The point corresponding to the <math>p\,\!</math> th level of the first predictor variable, <math>{{x}_{1}}\,\!</math>, and the <math>p\,\!</math> th level of the second predictor variable, <math>{{x}_{2}}\,\!</math>, does not lie in the shaded area, although both of these levels are within the range of the first and second predictor variables, respectively. In this case, the regression model is not applicable at this point.<br />
<br />
<br />
[[Image:doe5.18.png|center|519px|Predicted values and region of model application in multiple linear regression.|link=]]<br />
<br />
==Measures of Model Adequacy==<br />
As in the case of simple linear regression, analysis of a fitted multiple linear regression model is important before inferences based on the model are undertaken. This section presents some techniques that can be used to check the appropriateness of the multiple linear regression model.<br />
<br />
===Coefficient of Multiple Determination, ''R''<sup>2</sup>===<br />
The coefficient of multiple determination is similar to the coefficient of determination used in the case of simple linear regression. It is defined as: <br />
<br />
<br />
::<math>\begin{align}<br />
{{R}^{2}} & = & \frac{S{{S}_{R}}}{S{{S}_{T}}} \\ <br />
& = & 1-\frac{S{{S}_{E}}}{S{{S}_{T}}} <br />
\end{align}\,\!</math><br />
<br />
<br />
<math>{{R}^{2}}\,\!</math> indicates the amount of total variability explained by the regression model. The positive square root of <math>{{R}^{2}}\,\!</math> is called the multiple correlation coefficient and measures the linear association between <math>Y\,\!</math> and the predictor variables, <math>{{x}_{1}}\,\!</math>, <math>{{x}_{2}}\,\!</math>... <math>{{x}_{k}}\,\!</math>.<br />
<br />
The value of <math>{{R}^{2}}\,\!</math> increases as more terms are added to the model, even if the new term does not contribute significantly to the model. An increase in the value of <math>{{R}^{2}}\,\!</math> cannot be taken as a sign to conclude that the new model is superior to the older model. A better statistic to use is the adjusted <math>{{R}^{2}}\,\!</math> statistic defined as follows: <br />
<br />
<br />
::<math>\begin{align}<br />
R_{adj}^{2} &= & 1-\frac{M{{S}_{E}}}{M{{S}_{T}}} \\ <br />
& = & 1-\frac{S{{S}_{E}}/(n-(k+1))}{S{{S}_{T}}/(n-1)} \\ <br />
& = & 1-(\frac{n-1}{n-(k+1)})(1-{{R}^{2}}) <br />
\end{align}\,\!</math><br />
<br />
<br />
The adjusted <math>{{R}^{2}}\,\!</math> only increases when significant terms are added to the model. Addition of unimportant terms may lead to a decrease in the value of <math>R_{adj}^{2}\,\!</math>.<br />
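The adjusted <math>R^2\,\!</math> arithmetic can be sketched as follows, with hypothetical values chosen to show how an unimportant extra term can raise <math>{{R}^{2}}\,\!</math> slightly while lowering <math>R_{adj}^{2}\,\!</math>:<br />

```python
def r_sq_adj(r_sq, n, k):
    """Adjusted R^2 = 1 - ((n-1)/(n-(k+1))) * (1 - R^2)."""
    return 1 - ((n - 1) / (n - (k + 1))) * (1 - r_sq)

# Hypothetical values: the extra term nudges R^2 from 0.95 to 0.951,
# but the adjusted R^2 goes down, flagging the term as unimportant.
adj_two   = r_sq_adj(0.95, 17, 2)   # two-predictor model
adj_three = r_sq_adj(0.951, 17, 3)  # one extra, unimportant term
print(adj_two, adj_three)
```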
<br />
In a DOE folio, <math>{{R}^{2}}\,\!</math> and <math>R_{adj}^{2}\,\!</math> values are displayed as R-sq and R-sq(adj), respectively. Other values displayed along with these values are S, PRESS and R-sq(pred). As explained in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]], the value of S is the square root of the error mean square, <math>M{{S}_{E}}\,\!</math>, and represents the "standard error of the model."<br />
<br />
PRESS is an abbreviation for prediction error sum of squares. It is the error sum of squares calculated using the PRESS residuals in place of the residuals, <math>{{e}_{i}}\,\!</math>, in the equation for the error sum of squares. The PRESS residual, <math>{{e}_{(i)}}\,\!</math>, for a particular observation, <math>{{y}_{i}}\,\!</math>, is obtained by fitting the regression model to the remaining observations. Then the value for a new observation, <math>{{\hat{y}}_{p}}\,\!</math>, corresponding to the observation in question, <math>{{y}_{i}}\,\!</math>, is obtained based on the new regression model. The difference between <math>{{y}_{i}}\,\!</math> and <math>{{\hat{y}}_{p}}\,\!</math> gives <math>{{e}_{(i)}}\,\!</math>. The PRESS residual, <math>{{e}_{(i)}}\,\!</math>, can also be obtained using <math>{{h}_{ii}}\,\!</math>, the diagonal element of the hat matrix, <math>H\,\!</math>, as follows:<br />
<br />
<br />
::<math>{{e}_{(i)}}=\frac{{{e}_{i}}}{1-{{h}_{ii}}}\,\!</math><br />
<br />
<br />
<br />
R-sq(pred), also referred to as prediction <math>{{R}^{2}}\,\!</math>, is obtained using PRESS as shown next:<br />
<br />
<br />
::<math>R_{pred}^{2}=1-\frac{PRESS}{S{{S}_{T}}}\,\!</math><br />
<br />
<br />
The values of R-sq, R-sq(adj) and S are indicators of how well the regression model fits the observed data. The values of PRESS and R-sq(pred) are indicators of how well the regression model predicts new observations. For example, higher values of PRESS or lower values of R-sq(pred) indicate a model that predicts poorly. The figure below shows these values for the data. The values indicate that the regression model fits the data well and also predicts well.<br />
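A sketch of the PRESS and R-sq(pred) computation, using hypothetical residuals, leverages and total sum of squares (not the chapter's data):<br />

```python
import numpy as np

# Hypothetical residuals e_i and leverages h_ii (illustration only).
e = np.array([1.2, -0.8, 2.1, -1.5, 0.4])
h = np.array([0.30, 0.15, 0.25, 0.20, 0.10])
ss_t = 40.0  # hypothetical total sum of squares

e_press   = e / (1 - h)               # PRESS residuals e_(i) = e_i / (1 - h_ii)
press     = float(np.sum(e_press**2)) # prediction error sum of squares
r_sq_pred = 1 - press / ss_t          # prediction R^2
print(press, r_sq_pred)
```

Since each leverage lies between 0 and 1, every PRESS residual is at least as large in magnitude as the ordinary residual, so PRESS always exceeds <math>S{{S}_{E}}\,\!</math>.<br />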
<br />
[[Image:doe5_19.png|center|650px|Coefficient of multiple determination and related results for the data.]]<br />
<br />
===Residual Analysis===<br />
Plots of residuals, <math>{{e}_{i}}\,\!</math>, similar to the ones discussed in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] for simple linear regression, are used to check the adequacy of a fitted multiple linear regression model. The residuals are expected to be normally distributed with a mean of zero and a constant variance of <math>{{\sigma }^{2}}\,\!</math>. In addition, they should not show any patterns or trends when plotted against any variable or in a time or run-order sequence. Residual plots may also be obtained using standardized and studentized residuals. Standardized residuals, <math>{{d}_{i}}\,\!</math>, are obtained using the following equation: <br />
<br />
<br />
::<math>\begin{align}<br />
{{d}_{i}}&= & \frac{{{e}_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}}} \\ <br />
& = & \frac{{{e}_{i}}}{\sqrt{M{{S}_{E}}}} <br />
\end{align}\,\!</math><br />
<br />
<br />
Standardized residuals are scaled so that the standard deviation of the residuals is approximately equal to one. This helps to identify possible outliers or unusual observations. However, standardized residuals may understate the true residual magnitude; hence, studentized residuals, <math>{{r}_{i}}\,\!</math>, are used in their place. Studentized residuals are calculated as follows:<br />
<br />
<br />
::<math>\begin{align}<br />
{{r}_{i}} & = & \frac{{{e}_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}(1-{{h}_{ii}})}} \\ <br />
& = & \frac{{{e}_{i}}}{\sqrt{M{{S}_{E}}(1-{{h}_{ii}})}} <br />
\end{align}\,\!</math><br />
<br />
<br />
where <math>{{h}_{ii}}\,\!</math> is the <math>i\,\!</math> th diagonal element of the hat matrix, <math>H\,\!</math>. External studentized (or the studentized deleted) residuals may also be used. These residuals are based on the PRESS residuals mentioned in [[Multiple_Linear_Regression_Analysis#Coefficient_of_Multiple_Determination.2C_R2|Coefficient of Multiple Determination, ''R''<sup>2</sup>]]. The reason for using the external studentized residuals is that if the <math>i\,\!</math> th observation is an outlier, it may influence the fitted model. In this case, the residual <math>{{e}_{i}}\,\!</math> will be small and may not disclose that <math>i\,\!</math> th observation is an outlier. The external studentized residual for the <math>i\,\!</math> th observation, <math>{{t}_{i}}\,\!</math>, is obtained as follows:<br />
<br />
<br />
::<math>{{t}_{i}}={{e}_{i}}{{\left[ \frac{n-k}{S{{S}_{E}}(1-{{h}_{ii}})-e_{i}^{2}} \right]}^{0.5}}\,\!</math><br />
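The three flavors of residuals can be compared on a single hypothetical observation; the numeric values below are placeholders, and the external studentized residual follows the equation above:<br />

```python
import math

# Hypothetical single observation (illustration only).
e_i  = 4.0    # ordinary residual
h_ii = 0.25   # leverage (diagonal element of the hat matrix)
mse  = 30.24  # error mean square
n, k = 17, 2  # observations and predictors
ss_e = mse * (n - (k + 1))  # back out SS_E from MSE

d_i = e_i / math.sqrt(mse)                # standardized residual
d_studentized = e_i / math.sqrt(mse * (1 - h_ii))  # studentized residual
t_i = e_i * math.sqrt((n - k) / (ss_e * (1 - h_ii) - e_i**2))  # external studentized
print(d_i, d_studentized, t_i)
```

Since <math>0<{{h}_{ii}}<1\,\!</math>, the studentized residual is always at least as large as the standardized one, which is why it is less likely to understate an outlier.<br />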
<br />
<br />
Residual values for the data are shown in the figure below. Standardized residual plots for the data are shown in next two figures. The Weibull++ DOE folio compares the residual values to the critical values on the <math>t\,\!</math> distribution for studentized and external studentized residuals. <br />
<br />
<br />
[[Image:doe5_20.png|center|877px|Residual values for the data.|link=]]<br />
<br />
<br />
[[Image:doe5_21.png|center|650px|Residual probability plot for the data.|link=]]<br />
<br />
<br />
For other residuals the normal distribution is used. For example, for the data, the critical values on the <math>t\,\!</math> distribution at a significance of 0.1 are <math>{{t}_{0.05,14}}=1.761\,\!</math> and <math>-{{t}_{0.05,14}}=-1.761\,\!</math> (as calculated in the [[Multiple_Linear_Regression_Analysis#Example_3|example]], [[Multiple_Linear_Regression_Analysis#Test_on_Individual_Regression_Coefficients_.28t__Test.29|Test on Individual Regression Coefficients (''t'' Test)]]). The studentized residual values corresponding to the 3rd and 17th observations lie outside the critical values. Therefore, the 3rd and 17th observations are outliers. This can also be seen on the residual plots in the next two figures.<br />
<br />
[[Image:doe5_22.png|center|650px|Residual versus fitted values plot for the data.|link=]]<br />
<br />
<br />
[[Image:doe5_23.png|center|650px|Residual versus run order plot for the data.|link=]]<br />
<br />
===Outlying ''x'' Observations===<br />
Residuals help to identify outlying <math>y\,\!</math> observations. Outlying <math>x\,\!</math> observations can be detected using leverage. Leverage values are the diagonal elements of the hat matrix, <math>{{h}_{ii}}\,\!</math>. The <math>{{h}_{ii}}\,\!</math> values always lie between 0 and 1. Values of <math>{{h}_{ii}}\,\!</math> greater than <math>2(k+1)/n\,\!</math> are considered to be indicators of outlying <math>x\,\!</math> observations. <br />
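A leverage check can be sketched as follows, with hypothetical <math>{{h}_{ii}}\,\!</math> values:<br />

```python
# Flag outlying x observations: leverage h_ii > 2(k+1)/n (hypothetical values).
k, n = 2, 17
cutoff = 2 * (k + 1) / n                 # 6/17 ~= 0.3529
h = [0.28, 0.10, 0.45, 0.12, 0.33]       # hypothetical leverages
flagged = [i for i, hii in enumerate(h) if hii > cutoff]
print(cutoff, flagged)
```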
<br />
===Influential Observations Detection===<br />
Once an outlier is identified, it is important to determine if the outlier has a significant effect on the regression model. One measure to detect influential observations is Cook's distance measure which is computed as follows:<br />
<br />
<br />
::<math>{{D}_{i}}=\frac{r_{i}^{2}}{(k+1)}\left[ \frac{{{h}_{ii}}}{(1-{{h}_{ii}})} \right]\,\!</math><br />
<br />
<br />
To use Cook's distance measure, the <math>{{D}_{i}}\,\!</math> values are compared to percentile values on the <math>F\,\!</math> distribution with <math>(k+1,n-(k+1))\,\!</math> degrees of freedom. If the percentile value is less than 10 or 20 percent, then the <math>i\,\!</math> th case has little influence on the fitted values. However, if the percentile value is close to 50 percent or greater, the <math>i\,\!</math> th case is influential, and fitted values with and without the <math>i\,\!</math> th case will differ substantially.<br />
<br />
<br />
====Example====<br />
Cook's distance measure can be calculated as shown next. The distance measure is calculated for the first observation of the data. The remaining values along with the leverage values are shown in the figure below (displaying Leverage and Cook's distance measure for the data).<br />
<br />
<br />
[[Image:doe5_24.png|center|874px|Leverage and Cook's distance measure for the data.|link=]]<br />
<br />
<br />
The studentized residual corresponding to the first observation is:<br />
<br />
<br />
::<math>\begin{align}<br />
{{r}_{1}} & = & \frac{{{e}_{1}}}{\sqrt{M{{S}_{E}}(1-{{h}_{11}})}} \\ <br />
& = & \frac{1.3127}{\sqrt{30.3(1-0.2755)}} \\ <br />
& = & 0.2804 <br />
\end{align}\,\!</math><br />
<br />
<br />
Cook's distance measure for the first observation can now be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
{{D}_{1}} & = & \frac{r_{1}^{2}}{(k+1)}\left[ \frac{{{h}_{11}}}{(1-{{h}_{11}})} \right] \\ <br />
& = & \frac{{{0.2804}^{2}}}{(2+1)}\left[ \frac{0.2755}{(1-0.2755)} \right] \\ <br />
& = & 0.01 <br />
\end{align}\,\!</math><br />
<br />
<br />
The 50th percentile value for <math>{{F}_{3,14}}\,\!</math> is 0.83. Since all <math>{{D}_{i}}\,\!</math> values are less than this value, there are no influential observations.<br />
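The computation above can be reproduced directly from the values given in the text (small differences in the last digit come from rounding <math>M{{S}_{E}}\,\!</math>):<br />

```python
# Cook's distance for the first observation, using the values from the text.
e_1  = 1.3127   # residual
mse  = 30.3     # error mean square (rounded, as shown in the text)
h_11 = 0.2755   # leverage of the first observation
k    = 2        # number of predictor variables

r_1 = e_1 / (mse * (1 - h_11)) ** 0.5                # studentized residual
d_1 = (r_1**2 / (k + 1)) * (h_11 / (1 - h_11))       # Cook's distance
print(r_1, d_1)
```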
<br />
===Lack-of-Fit Test===<br />
The lack-of-fit test for simple linear regression discussed in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]] may also be applied to multiple linear regression to check the appropriateness of the fitted response surface and see if a higher order model is required. Data for <math>m\,\!</math> replicates may be collected as follows for all <math>n\,\!</math> levels of the predictor variables:<br />
<br />
<br />
::<math>\begin{align}<br />
& & {{y}_{11}},{{y}_{12}},....,{{y}_{1m}}\text{ }m\text{ repeated observations at the first level } \\ <br />
& & {{y}_{21}},{{y}_{22}},....,{{y}_{2m}}\text{ }m\text{ repeated observations at the second level} \\ <br />
& & ... \\ <br />
& & {{y}_{i1}},{{y}_{i2}},....,{{y}_{im}}\text{ }m\text{ repeated observations at the }i\text{th level} \\ <br />
& & ... \\ <br />
& & {{y}_{n1}},{{y}_{n2}},....,{{y}_{nm}}\text{ }m\text{ repeated observations at the }n\text{th level } <br />
\end{align}\,\!</math><br />
<br />
<br />
The sum of squares due to pure error, <math>S{{S}_{PE}}\,\!</math>, can be obtained as discussed in [[Simple_Linear_Regression_Analysis| Simple Linear Regression Analysis]]:<br />
<br />
<br />
::<math>S{{S}_{PE}}=\underset{i=1}{\overset{n}{\mathop \sum }}\,\underset{j=1}{\overset{m}{\mathop \sum }}\,{{({{y}_{ij}}-{{\bar{y}}_{i}})}^{2}}\,\!</math><br />
<br />
<br />
The number of degrees of freedom associated with <math>S{{S}_{PE}}\,\!</math> is:<br />
<br />
<br />
::<math> dof(S{S}_{PE}) = nm-n \,\! </math><br />
<br />
<br />
Knowing <math>S{{S}_{PE}}\,\!</math>, the sum of squares due to lack-of-fit, <math>S{{S}_{LOF}}\,\!</math>, can be obtained as: <br />
<br />
<br />
::<math>S{{S}_{LOF}}=S{{S}_{E}}-S{{S}_{PE}}\,\!</math><br />
<br />
<br />
The number of degrees of freedom associated with <math>S{{S}_{LOF}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
dof(S{{S}_{LOF}}) & = & dof(S{{S}_{E}})-dof(S{{S}_{PE}}) \\<br />
& = & [nm-(k+1)]-(nm-n) \\<br />
& = & n-(k+1) <br />
\end{align}\,\!</math><br />
<br />
<br />
The test statistic for the lack-of-fit test is:<br />
<br />
<br />
::<math>\begin{align}<br />
{{F}_{0}} & = & \frac{S{{S}_{LOF}}/dof(S{{S}_{LOF}})}{S{{S}_{PE}}/dof(S{{S}_{PE}})} \\ <br />
& = & \frac{M{{S}_{LOF}}}{M{{S}_{PE}}} <br />
\end{align}\,\!</math><br />
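The lack-of-fit arithmetic can be sketched with hypothetical sums of squares; the degrees of freedom follow the expressions above, with <math>n\,\!</math> levels and <math>m\,\!</math> replicates:<br />

```python
# Lack-of-fit test arithmetic (hypothetical sums of squares).
n, m, k = 6, 3, 1         # 6 levels, 3 replicates, 1 predictor
ss_e, ss_pe = 40.0, 22.0  # hypothetical SS_E and SS_PE

ss_lof  = ss_e - ss_pe                # SS_LOF = SS_E - SS_PE
dof_pe  = n * m - n                   # nm - n = 12
dof_lof = (n * m - (k + 1)) - dof_pe  # [nm-(k+1)] - (nm-n) = n-(k+1) = 4
f0 = (ss_lof / dof_lof) / (ss_pe / dof_pe)  # MS_LOF / MS_PE
print(f0)
```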
<br />
<br />
==Other Topics in Multiple Linear Regression==<br />
<br />
===Polynomial Regression Models===<br />
Polynomial regression models are used when the response is curvilinear. The equation shown next presents a second order polynomial regression model with one predictor variable:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{11}}x_{1}^{2}+\epsilon\,\!</math><br />
<br />
<br />
Usually, coded values are used in these models. Values of the variables are coded by centering (expressing the levels of the variable as deviations from its mean value) and then scaling (dividing the deviations by half of the range of the variable).<br />
<br />
<br />
::<math>coded\text{ }value=\frac{actual\text{ }value-mean}{half\text{ }of\text{ }range}\,\!</math><br />
<br />
<br />
The reason for using coded predictor variables is that many times <math>x\,\!</math> and <math>{{x}^{2}}\,\!</math> are highly correlated and, if uncoded values are used, there may be computational difficulties while calculating the <math>{{({{X}^{\prime }}X)}^{-1}}\,\!</math> matrix needed to obtain the least squares estimates, <math>\hat{\beta }\,\!</math>, of the regression coefficients.<br />
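A sketch of the coding transformation, using hypothetical factor levels:<br />

```python
def code(actual, low, high):
    """Coded value = (actual - mean) / (half of range)."""
    mean = (low + high) / 2
    half_range = (high - low) / 2
    return (actual - mean) / half_range

# A factor run between hypothetical levels of 100 and 200:
# the low level codes to -1, the center to 0 and the high level to +1.
print(code(100, 100, 200), code(150, 100, 200), code(200, 100, 200))
```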
<br />
===Qualitative Factors===<br />
The multiple linear regression model also supports the use of qualitative factors. For example, gender may need to be included as a factor in a regression model. One of the ways to include qualitative factors in a regression model is to employ indicator variables. Indicator variables take on values of 0 or 1. For example, an indicator variable may be used with a value of 1 to indicate female and a value of 0 to indicate male.<br />
<br />
<br />
::<math>{{x}_{1}}=\{\begin{array}{*{35}{l}}<br />
1\text{ Female} \\<br />
0\text{ Male} \\<br />
\end{array}\,\!</math><br />
<br />
<br />
In general, (<math>n-1\,\!</math>) indicator variables are required to represent a qualitative factor with <math>n\,\!</math> levels. As an example, a qualitative factor representing three types of machines may be represented as follows, using two indicator variables: <br />
<br />
<br />
::<math>\begin{align}<br />
{{x}_{1}} & = & 1,\text{ }{{x}_{2}} & = & 0\text{ Machine Type I} \\ <br />
{{x}_{1}} & = & 0,\text{ }{{x}_{2}} & = & 1\text{ Machine Type II} \\ <br />
{{x}_{1}} & = & 0,\text{ }{{x}_{2}} & = & 0\text{ Machine Type III} <br />
\end{align}\,\!</math><br />
<br />
<br />
An alternative coding scheme for this example is to use a value of -1 for all indicator variables when representing the last level of the factor:<br />
<br />
<br />
::<math>\begin{align}<br />
{{x}_{1}} & = & 1,\text{ }{{x}_{2}}& = &0\text{ Machine Type I} \\ <br />
{{x}_{1}}& = & 0,\text{ }{{x}_{2}}& = &1\text{ Machine Type II} \\ <br />
{{x}_{1}}& = & -1,\text{ }{{x}_{2}}& = &-1\text{ Machine Type III} <br />
\end{align}\,\!</math><br />
<br />
<br />
Indicator variables are also referred to as dummy variables or binary variables.<br />
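The indicator-variable scheme for the machine-type example can be sketched as follows (the 0/1 coding shown above, with the last level represented by all zeros):<br />

```python
def indicator_columns(levels, level):
    """(n-1) 0/1 indicator variables for a qualitative factor with n levels;
    the last level is represented by all zeros."""
    return [1 if level == lv else 0 for lv in levels[:-1]]

machines = ["Type I", "Type II", "Type III"]
print(indicator_columns(machines, "Type I"))    # [1, 0]
print(indicator_columns(machines, "Type II"))   # [0, 1]
print(indicator_columns(machines, "Type III"))  # [0, 0]
```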
<br />
====Example====<br />
Consider the data from two types of reactors of a chemical process, shown next, where the yield values are recorded for various levels of factor <math>{{x}_{1}}\,\!</math>. Assuming that there are no interactions between the reactor type and <math>{{x}_{1}}\,\!</math>, a regression model can be fitted to this data as follows.<br />
<br />
Since the reactor type is a qualitative factor with two levels, it can be represented by using one indicator variable. Let <math>{{x}_{2}}\,\!</math> be the indicator variable representing the reactor type, with 0 representing the first type of reactor and 1 representing the second type of reactor.<br />
<br />
<br />
::<math>{{x}_{2}}=\{\begin{array}{*{35}{l}}<br />
0\text{ Reactor Type I} \\<br />
1\text{ Reactor Type II} \\<br />
\end{array}\,\!</math><br />
<br />
<br />
[[Image:doet5.3.png|center|323px|Yield data from the two types of reactors for a chemical process.|link=]]<br />
<br />
<br />
Data entry in the DOE folio for this example is shown in the figure after the table below. The regression model for this data is:<br />
<br />
<br />
::<math>Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\epsilon\,\!</math><br />
<br />
<br />
The <math>X\,\!</math> and <math>y\,\!</math> matrices for this model can be constructed from the given data, shown as entered in the figure below.<br />
<br />
<br />
<br />
[[Image:doe5_25.png|center|700px|Data from the table above as entered in Weibull++.]]<br />
<br />
<br />
The estimated regression coefficients for the model can be obtained as:<br />
<br />
<br />
::<math>\begin{align}<br />
\hat{\beta }& = & {{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y \\ <br />
& = & \left[ \begin{matrix}<br />
153.7 \\<br />
2.4 \\<br />
-27.5 \\<br />
\end{matrix} \right] <br />
\end{align}\,\!</math><br />
<br />
<br />
Therefore, the fitted regression model is:<br />
<br />
<br />
::<math>\hat{y}=153.7+2.4{{x}_{1}}-27.5{{x}_{2}}\,\!</math><br />
<br />
<br />
Note that since <math>{{x}_{2}}\,\!</math> represents a qualitative predictor variable, the fitted regression model cannot be plotted simultaneously against <math>{{x}_{1}}\,\!</math> and <math>{{x}_{2}}\,\!</math> in a two-dimensional space (because the resulting surface plot will be meaningless for the dimension in <math>{{x}_{2}}\,\!</math> ). To illustrate this, a scatter plot of the data against <math>{{x}_{2}}\,\!</math> is shown in the following figure. <br />
<br />
<br />
[[Image:doe5_26.png|center|700px|Scatter plot of the observed yield values against <math>x_2\,\!</math> (reactor type)]]<br />
<br />
<br />
It can be noted that, in the case of qualitative factors, the nature of the relationship between the response (yield) and the qualitative factor (reactor type) cannot be categorized as linear, quadratic, cubic, etc. The only conclusion that can be drawn for these factors is whether they contribute significantly to the regression model. This can be done by employing the partial <math>F\,\!</math> test discussed in [[Multiple_Linear_Regression_Analysis#Test_on_Subsets_of_Regression_Coefficients_.28Partial_F_Test.29|Multiple Linear Regression Analysis]] (using the extra sum of squares of the indicator variables representing these factors). The results of the test for the present example are shown in the ANOVA table. The results show that <math>{{x}_{2}}\,\!</math> (reactor type) contributes significantly to the fitted regression model.<br />
<br />
<br />
[[Image:doe5_27.png|center|700px|DOE results for the data.]]<br />
<br />
===Multicollinearity===<br />
At times the predictor variables included in a multiple linear regression model may be found to be dependent on each other. Multicollinearity is said to exist in a multiple regression model with strong dependencies between the predictor variables.<br />
Multicollinearity affects the regression coefficients and the extra sum of squares of the predictor variables. In a model with multicollinearity, the estimate of the regression coefficient of a predictor variable depends on what other predictor variables are included in the model. The dependence may even lead to a change in the sign of the regression coefficient. In such models, an estimated regression coefficient may not be found to be significant individually (when using the <math>t\,\!</math> test on the individual coefficient or looking at the <math>p\,\!</math> value) even though a statistical relation is found to exist between the response variable and the set of the predictor variables (when using the <math>F\,\!</math> test for the set of predictor variables). Therefore, you should be careful when looking at individual predictor variables in models that have multicollinearity. Care should also be taken when looking at the extra sum of squares for a predictor variable that is correlated with other variables. This is because, in models with multicollinearity, the extra sum of squares is not unique and depends on the other predictor variables included in the model. <br />
<br />
<br />
Multicollinearity can be detected using the variance inflation factor (abbreviated <math>VIF\,\!</math> ). <math>VIF\,\!</math> for a coefficient <math>{{\beta }_{j}}\,\!</math> is defined as:<br />
<br />
<br />
::<math>VIF=\frac{1}{(1-R_{j}^{2})}\,\!</math><br />
<br />
<br />
where <math>R_{j}^{2}\,\!</math> is the coefficient of multiple determination resulting from regressing the <math>j\,\!</math> th predictor variable, <math>{{x}_{j}}\,\!</math>, on the remaining <math>k-1\,\!</math> predictor variables. Values of <math>VIF\,\!</math> considerably greater than 1 indicate multicollinearity problems.<br />
A few methods of dealing with multicollinearity include increasing the number of observations in a way designed to break up dependencies among predictor variables, combining the linearly dependent predictor variables into one variable, eliminating variables from the model that are unimportant or using coded variables. <br />
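A sketch of the variance inflation factor computation; the <math>R_{j}^{2}\,\!</math> value used here is hypothetical:<br />

```python
def vif(r_sq_j):
    """Variance inflation factor: VIF = 1 / (1 - R_j^2)."""
    return 1 / (1 - r_sq_j)

# If regressing x_j on the other predictors gives R_j^2 = 0.911
# (a hypothetical value), the predictor is strongly involved in a
# dependency and its VIF is large:
print(round(vif(0.911), 1))
```

A predictor that is completely unrelated to the others has <math>R_{j}^{2}=0\,\!</math> and a VIF of exactly 1.<br />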
<br />
====Example====<br />
Variance inflation factors can be obtained for the data below. <br />
<br />
[[Image:doet5.1.png|center|351px|Observed yield data for various levels of two factors.|link=]]<br />
<br />
To calculate the variance inflation factor for <math>{{x}_{1}}\,\!</math>, <math>R_{1}^{2}\,\!</math> has to be calculated. <math>R_{1}^{2}\,\!</math> is the coefficient of determination for the model in which <math>{{x}_{1}}\,\!</math> is regressed on the remaining variables. In this example there is just one remaining variable, <math>{{x}_{2}}\,\!</math>. If a regression model is fit to the data, taking <math>{{x}_{1}}\,\!</math> as the response variable and <math>{{x}_{2}}\,\!</math> as the predictor variable, then the design matrix and the vector of observations are:<br />
<br />
<br />
::<math>{{X}_{{{R}_{1}}}}=\left[ \begin{matrix}<br />
1 & 29.1 \\<br />
1 & 29.3 \\<br />
. & . \\<br />
. & . \\<br />
. & . \\<br />
1 & 32.9 \\<br />
\end{matrix} \right]\text{ }{{y}_{{{R}_{1}}}}=\left[ \begin{matrix}<br />
41.9 \\<br />
43.4 \\<br />
. \\<br />
. \\<br />
. \\<br />
77.8 \\<br />
\end{matrix} \right]\,\!</math><br />
<br />
<br />
The regression sum of squares for this model can be obtained as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{R}}= & y_{{{R}_{1}}}^{\prime }\left[ {{H}_{{{R}_{1}}}}-(\frac{1}{n})J \right]{{y}_{{{R}_{1}}}} \\ <br />
= & 1988.6 <br />
\end{align}\,\!</math><br />
<br />
<br />
where <math>{{H}_{{{R}_{1}}}}\,\!</math> is the hat matrix (and is calculated using <math>{{H}_{{{R}_{1}}}}={{X}_{{{R}_{1}}}}{{(X_{{{R}_{1}}}^{\prime }{{X}_{{{R}_{1}}}})}^{-1}}X_{{{R}_{1}}}^{\prime }\,\!</math> ) and <math>J\,\!</math> is the matrix of ones. The total sum of squares for the model can be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{T}}= & {{y}^{\prime }}\left[ I-(\frac{1}{n})J \right]y \\ <br />
= & 2182.9 <br />
\end{align}\,\!</math><br />
<br />
<br />
where <math>I\,\!</math> is the identity matrix. Therefore: <br />
<br />
<br />
::<math>\begin{align}<br />
R_{1}^{2}= & \frac{S{{S}_{R}}}{S{{S}_{T}}} \\ <br />
= & \frac{1988.6}{2182.9} \\ <br />
= & 0.911 <br />
\end{align}\,\!</math><br />
<br />
<br />
Then the variance inflation factor for <math>{{x}_{1}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
VI{{F}_{1}}= & \frac{1}{(1-R_{1}^{2})} \\ <br />
= & \frac{1}{1-0.911} \\ <br />
= & 11.2 <br />
\end{align}\,\!</math><br />
<br />
<br />
The variance inflation factor for <math>{{x}_{2}}\,\!</math>, <math>VI{{F}_{2}}\,\!</math>, can be obtained in a similar manner. In the DOE folios, the variance inflation factors are displayed in the VIF column of the Regression Information table as shown in the following figure. Since the values of the variance inflation factors obtained are considerably greater than 1, multicollinearity is an issue for the data.<br />
<br />
<br />
[[Image:doe5_28.png|center|888px|Variance inflation factors for the data.|link=]]</div>
<hr />
<div>{{template:LDABOOK|8|The Weibull Distribution}}<br />
The Weibull distribution is one of the most widely used lifetime distributions in reliability engineering. It is a versatile distribution that can take on the characteristics of other types of distributions, based on the value of the shape parameter, <math> {\beta} \,\!</math>. This chapter provides a brief background on the Weibull distribution, presents and derives most of the applicable equations and presents examples calculated both manually and by using ReliaSoft's [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Rw01CJDOxn%2FabhkPlZsy6DwBQ%2BaCXsGR Weibull++ software]. <br />
<br />
== Weibull Probability Density Function ==<br />
===The 3-Parameter Weibull===<br />
{{three-parameter weibull distribution}}<br />
<br />
===The 2-Parameter Weibull ===<br />
The 2-parameter Weibull ''pdf'' is obtained by setting <br />
<math> \gamma=0 \,\!</math>, and is given by: <br />
<br />
:<math> f(t)={ \frac{\beta }{\eta }}\left( {\frac{t}{\eta }}\right) ^{\beta -1}e^{-\left( { \frac{t}{\eta }}\right) ^{\beta }} \,\!</math><br />
<br />
=== The 1-Parameter Weibull===<br />
The 1-parameter Weibull ''pdf'' is obtained by again setting <math>\gamma=0 \,\!</math> and assuming that the shape parameter takes a known, constant value, <math>\beta=C=\text{constant} \,\!</math>, or: <br />
<br />
::<math> f(t)={ \frac{C}{\eta }}\left( {\frac{t}{\eta }}\right) ^{C-1}e^{-\left( {\frac{t}{ \eta }}\right) ^{C}} \,\!</math> <br />
<br />
where the only unknown parameter is the scale parameter, <math>\eta\,\!</math>. <br />
<br />
Note that in the formulation of the 1-parameter Weibull, we assume that the shape parameter <math>\beta \,\!</math> is known ''a priori'' from past experience with identical or similar products. The advantage of doing this is that data sets with few or no failures can be analyzed.<br />
<br />
==Weibull Distribution Functions==<br />
{{:Weibull Distribution Functions}}<br />
<br />
== Characteristics of the Weibull Distribution ==<br />
{{:Weibull Distribution Characteristics}}<br />
<br />
== Estimation of the Weibull Parameters ==<br />
The estimates of the parameters of the Weibull distribution can be found graphically via probability plotting paper, or analytically, using either least squares (rank regression) or maximum likelihood estimation (MLE). <br />
<br />
=== Probability Plotting ===<br />
One method of calculating the parameters of the Weibull distribution is by using probability plotting. To better illustrate this procedure, consider the following example from Kececioglu [[Appendix:_Life_Data_Analysis_References|[20]]]. <br />
<br />
Assume that six identical units are being reliability tested at the same application and operation stress levels. All of these units fail during the test after operating the following number of hours: 93, 34, 16, 120, 53 and 75. Estimate the values of the parameters for a 2-parameter Weibull distribution and determine the reliability of the units at a time of 15 hours.<br />
<br />
<br />
'''Solution'''<br />
<br />
The steps for determining the parameters of the Weibull representing the data, using probability plotting, are outlined in the following instructions. First, rank the times-to-failure in ascending order as shown next. <br />
<br />
{| border="1" align="center" style="border-collapse: collapse;" cellpadding="5" cellspacing="5"<br />
|-<br />
! valign="middle" scope="col" align="center" | Time-to-failure, <br>hours<br />
! valign="middle" scope="col" align="center" | Failure Order Number <br>out of Sample Size of 6<br />
|-<br />
| valign="middle" align="center" | 16 <br />
| valign="middle" align="center" | 1<br />
|-<br />
| valign="middle" align="center" | 34 <br />
| valign="middle" align="center" | 2<br />
|-<br />
| valign="middle" align="center" | 53 <br />
| valign="middle" align="center" | 3<br />
|-<br />
| valign="middle" align="center" | 75 <br />
| valign="middle" align="center" | 4<br />
|-<br />
| valign="middle" align="center" | 93 <br />
| valign="middle" align="center" | 5<br />
|-<br />
| valign="middle" align="center" | 120 <br />
| valign="middle" align="center" | 6<br />
|}<br />
<br />
Obtain their median rank plotting positions. Median rank positions are used instead of other ranking methods because median ranks are at a specific confidence level (50%). Median ranks can be found tabulated in many reliability books. They can also be estimated using the following equation: <br />
<br />
::<math> MR \sim { \frac{i-0.3}{N+0.4}}\cdot 100 \,\!</math> <br />
<br />
where <math>i\,\!</math> is the failure order number and <math>N\,\!</math> is the total sample size. The exact median ranks are found in Weibull++ by solving: <br />
<br />
::<math>\sum_{k=i}^N{\binom{N}{k}}{MR^k}{(1-MR)^{N-k}}=0.5=50%<br />
\,\!</math> <br />
<br />
for <math>MR\,\!</math>, where <math>N\,\!</math> is the sample size and <math>i\,\!</math> the order number. The times-to-failure, with their corresponding median ranks, are shown next. <br />
<br />
{|border="1" align="center" style="border-collapse: collapse;" cellpadding="5" cellspacing="5"<br />
|-<br />
! Time-to-failure, hours<br />
! Median Rank,%<br />
|-<br />
| 16 <br />
| 10.91<br />
|-<br />
| 34 <br />
| 26.44<br />
|-<br />
| 53 <br />
| 42.14<br />
|-<br />
| 75 <br />
| 57.86<br />
|-<br />
| 93 <br />
| 73.56<br />
|-<br />
| 120 <br />
| 89.1<br />
|}<br />
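Both the approximation and the exact median ranks in the table can be reproduced with a short script. A sketch in plain Python (the bisection solves the binomial equation given above; <code>benard</code> and <code>median_rank</code> are hypothetical names):

```python
from math import comb

def benard(i, n):
    """Benard's approximation: MR ~ (i - 0.3) / (n + 0.4)."""
    return (i - 0.3) / (n + 0.4)

def median_rank(i, n, tol=1e-12):
    """Exact median rank: solve sum_{k=i}^{n} C(n,k) MR^k (1-MR)^(n-k) = 0.5
    for MR by bisection (the binomial tail probability is increasing in MR)."""
    def tail(p):
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(i, n + 1))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tail(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For the six failure times, <code>median_rank(1, 6)</code> gives about 0.1091 and <code>median_rank(6, 6)</code> about 0.8910, matching the table.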
<br />
<br />
On a Weibull probability paper, plot the times and their corresponding ranks. A sample of a Weibull probability paper is given in the following figure. <br />
<br />
[[Image:WB.8 example of paper.png|center|450px| Example of Weibull probability plotting paper. ]]<br />
<br />
The points of the data in the example are shown in the figure below. Draw the best possible straight line through these points, as shown below, then obtain the slope of this line by drawing a line, parallel to the one just obtained, through the slope indicator. This value is the estimate of the shape parameter <math> \hat{\beta } \,\!</math>, in this case <math> \hat{\beta }=1.4 \,\!</math>. <br />
<br />
[[Image:WB.8 probability plotting.png|center|350px| Probability plot of data in Example 1.]]<br />
<br />
At the <math> Q(t)=63.2%\,\!</math> ordinate point, draw a straight horizontal line until this line intersects the fitted straight line. Draw a vertical line through this intersection until it crosses the abscissa. The value at the intersection of the abscissa is the estimate of <math> \hat{\eta } \,\!</math>. For this case, <math> \hat{\eta }=76 \,\!</math> hours. This is always at 63.2% since: <br />
<br />
::<math> Q(t)=1-e^{-(\frac{t}{\eta })^{\beta }}=1-e^{-1}=0.632=63.2% \,\!</math> <br />
<br />
Now any reliability value for any mission time <math>t\,\!</math> can be obtained. For example, the reliability for a mission of 15 hours, or any other time, can now be obtained either from the plot or analytically. To obtain the value from the plot, draw a vertical line from the abscissa, at <math>t=15\,\!</math> hours, to the fitted line. Draw a horizontal line from this intersection to the ordinate and read <math> Q(t)\,\!</math>, in this case <math> Q(t)=9.8%\,\!</math>. Thus, <math> R(t)=1-Q(t)=90.2%\,\!</math>. This can also be obtained analytically from the Weibull reliability function since the estimates of both of the parameters are known: <br />
<br />
::<math> R(t=15)=e^{-\left( \frac{15}{\eta }\right) ^{\beta }}=e^{-\left( \frac{15}{76 }\right) ^{1.4}}=90.2% \,\!</math><br />
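The analytical check is one line of code; a sketch with a hypothetical helper name:

```python
from math import exp

def weibull_reliability(t, beta, eta):
    """2-parameter Weibull reliability: R(t) = exp(-(t/eta)^beta)."""
    return exp(-((t / eta) ** beta))
```

<code>weibull_reliability(15, 1.4, 76)</code> returns about 0.902, the same 90.2% read off the plot.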
<br />
====Probability Plotting for the Location Parameter, Gamma====<br />
<br />
The third parameter of the Weibull distribution is utilized when the data do not fall on a straight line, but fall on either a concave up or down curve. The following statements can be made regarding the value of <math>\gamma \,\!</math>:<br />
<br />
*'''Case 1:''' If the curve for MR versus <math>{{t}_{j}}\,\!</math> is concave down and the curve for MR versus <math>{({t}_{j}-{t}_{1})}\,\!</math> is concave up, then there exists a <math>\gamma \,\!</math> such that <math>0< \gamma < t_{1}\,\!</math>, or <math>\gamma \,\!</math> has a positive value. <br />
<br />
*'''Case 2''': If the curves for MR versus <math>{{t}_{j}}\,\!</math> and MR versus <math>{({t}_{j}-{t}_{1})}\,\!</math> are both concave up, then there exists a negative <math>\gamma \,\!</math> which will straighten out the curve of MR versus <math>{{t}_{j}}\,\!</math>. <br />
<br />
*'''Case 3''': If neither one of the previous two cases prevails, then either reject the Weibull as one capable of representing the data, or proceed with the multiple population (mixed Weibull) analysis. To obtain the location parameter, <math>\gamma \,\!</math>:<br />
<br />
::*Subtract the same arbitrary value, <math>\gamma \,\!</math>, from all the times to failure and replot the data. <br />
::*If the initial curve is concave up, subtract a negative <math>\gamma \,\!</math> from each failure time. <br />
::*If the initial curve is concave down, subtract a positive <math>\gamma \,\!</math> from each failure time. <br />
::*Repeat until the data plots on an acceptable straight line. <br />
::*The value of <math>\gamma \,\!</math> is the subtracted (positive or negative) value that places the points in an acceptable straight line. <br />
<br />
The other two parameters are then obtained using the techniques previously described. Also, it is important to note that we used the term subtract a positive or negative gamma, where subtracting a negative gamma is equivalent to adding it. Note that when adjusting for gamma, the x-axis scale for the straight line becomes <math>{({t}-\gamma)}\,\!</math>.<br />
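The subtract-and-replot loop can be automated by scoring each candidate <math>\gamma \,\!</math> with the correlation coefficient of the resulting probability plot and keeping the straightest fit. The sketch below is one simple way to do that (a grid search using Benard's median-rank approximation); it is not the algorithm Weibull++ uses, and <code>find_gamma</code> is a hypothetical name:

```python
from math import log

def find_gamma(times, steps=2000):
    """Grid-search the location parameter: for each candidate gamma, plot
    ln(t - gamma) against the median-rank ordinate y = ln(-ln(1 - MR)) and
    keep the gamma whose plot is most nearly a straight line (largest
    correlation coefficient). Candidates cover negative gammas (concave-up
    plots) up to just below the first failure time (concave-down plots)."""
    n = len(times)
    ts = sorted(times)
    mr = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]  # Benard's approximation
    y = [log(-log(1 - f)) for f in mr]

    def corr(g):
        x = [log(t - g) for t in ts]
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / (sxx * syy) ** 0.5

    t1 = ts[0]
    candidates = [t1 * k / steps for k in range(-steps, steps)]  # g < t1 always
    return max(candidates, key=corr)
```

On data generated from a 3-parameter Weibull, the recovered <math>\gamma \,\!</math> lands within the grid resolution of the true value.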
<br />
=== Rank Regression on Y ===<br />
Performing rank regression on Y requires that a straight line be fitted mathematically to a set of data points such that the sum of the squares of the vertical deviations from the points to the line is minimized. This is in essence the same methodology as the probability plotting method, except that the principle of least squares is used to determine the line through the points, as opposed to just eyeballing it. The first step is to bring our function into a linear form. For the 2-parameter Weibull distribution, the cumulative distribution function (''cdf'') is:<br />
<br />
::<math> F(t)=1-e^{-\left( \frac{t}{\eta }\right) ^{\beta }} \,\!</math> <br />
<br />
Taking the natural logarithm of both sides of the equation yields: <br />
<br />
::<math>\ln[ 1-F(t)] =-( \frac{t}{\eta }) ^{\beta } \,\!</math><br />
<br />
::<math> \ln{ -\ln[ 1-F(t)]} =\beta \ln ( \frac{t}{ \eta }) \,\!</math><br />
<br />
or: <br />
<br />
::<math>\begin{align}<br />
\ln \{ -\ln[ 1-F(t)]\} =-\beta \ln (\eta )+\beta \ln (t)<br />
\end{align}\,\!</math><br />
<br />
Now let: <br />
<br />
::<math>\begin{align}<br />
y = \ln \{ -\ln[ 1-F(t)]\}<br />
\end{align}\,\!</math><br />
<br />
::<math>\begin{align}<br />
a = -\beta \ln (\eta )<br />
\end{align}\,\!</math><br />
<br />
and: <br />
<br />
::<math>\begin{align}<br />
b= \beta<br />
\end{align}\,\!</math><br />
<br />
which results in the linear equation of: <br />
<br />
::<math>\begin{align}<br />
y=a+bx<br />
\end{align}\,\!</math><br />
<br />
The least squares parameter estimation method (also known as ''regression analysis'') was discussed in [[Parameter Estimation]], and the following equations for regression on Y were derived: <br />
<br />
::<math> \hat{a}=\frac{\sum\limits_{i=1}^{N}y_{i}}{N}-\hat{b}\frac{ \sum\limits_{i=1}^{N}x_{i}}{N}=\bar{y}-\hat{b}\bar{x} \,\!</math> <br />
<br />
and: <br />
<br />
::<math> \hat{b}={\frac{\sum\limits_{i=1}^{N}x_{i}y_{i}-\frac{\sum \limits_{i=1}^{N}x_{i}\sum\limits_{i=1}^{N}y_{i}}{N}}{\sum \limits_{i=1}^{N}x_{i}^{2}-\frac{\left( \sum\limits_{i=1}^{N}x_{i}\right) ^{2}}{N}}} \,\!</math> <br />
<br />
In this case the equations for <math>{{y}_{i}}\,\!</math> and <math>{{x}_{i}}\,\!</math> are: <br />
<br />
::<math> y_{i}=\ln \left\{ -\ln [1-F(t_{i})]\right\} \,\!</math> <br />
<br />
and: <br />
::<math>\begin{align}<br />
x_{i}=ln(t_{i}) <br />
\end{align}\,\!</math><br />
<br />
<br />
The <math> F(t_{i})\,\!</math> values are estimated from the median ranks.<br />
<br />
Once <math> \hat{a} \,\!</math> and <math> \hat{b} \,\!</math> are obtained, then <math> \hat{\beta } \,\!</math> and <math> \hat{\eta } \,\!</math> can easily be obtained from previous equations. <br />
<br />
'''The Correlation Coefficient'''<br />
<br />
The correlation coefficient is defined as follows: <br />
<br />
::<math> \rho ={\frac{\sigma _{xy}}{\sigma _{x}\sigma _{y}}} \,\!</math> <br />
<br />
where <math>\sigma_{xy}\,\!</math> = covariance of <math>x\,\!</math> and <math>y\,\!</math>, <math>\sigma_{x}\,\!</math> = standard deviation of <math>x\,\!</math>, and <math>\sigma_{y}\,\!</math> = standard deviation of <math>y\,\!</math>. The estimator of <math>\rho\,\!</math> is the sample correlation coefficient, <math> \hat{\rho} \,\!</math>, given by: <br />
<br />
::<math> \hat{\rho}=\frac{\sum\limits_{i=1}^{N}(x_{i}-\overline{x})(y_{i}-\overline{y} )}{\sqrt{\sum\limits_{i=1}^{N}(x_{i}-\overline{x})^{2}\cdot \sum\limits_{i=1}^{N}(y_{i}-\overline{y})^{2}}}\,\!</math> <br />
<br />
====RRY Example====<br />
<br />
Consider the same data set from the [[The_Weibull_Distribution#Probability_Plotting|probability plotting example]] given above (with six failures at 16, 34, 53, 75, 93 and 120 hours). Estimate the parameters and the correlation coefficient using rank regression on Y, assuming that the data follow the 2-parameter Weibull distribution.<br />
<br />
'''Solution'''<br />
<br />
Construct a table as shown next. <br />
<br />
{|border="1" align="center" style="border-collapse: collapse;" cellpadding="5" cellspacing="5" <br />
|-<br />
!colspan="8" style="text-align:center"| Least Squares Analysis<br />
|- <br />
!<math>N\,\!</math><br />
!<math>T_{i}\,\!</math><br />
!<math>ln(T_{i})\,\!</math><br />
!<math>F(T_i)\,\!</math><br />
!<math>y_{i}\,\!</math><br />
!<math>(ln{T_i})^2\,\!</math><br />
!<math>{y_i}^2\,\!</math><br />
!<math>(ln{T_i})y_i\,\!</math><br />
|- <br />
|1 ||16||2.7726||0.1091||-2.1583||7.6873||4.6582||-5.9840<br />
|- <br />
|2 ||34||3.5264||0.2645||-1.1802||12.4352||1.393||-4.1620<br />
|- <br />
|3 ||53||3.9703||0.4214||-0.6030||15.7632||0.3637||-2.3943<br />
|- <br />
|4 ||75||4.3175||0.5786||-0.146||18.6407||0.0213||-0.6303<br />
|- <br />
|5 ||93||4.5326||0.7355||0.2851||20.5445||0.0813||1.2923<br />
|- <br />
|6 ||120||4.7875||0.8909||0.7955||22.9201||0.6328||3.8083<br />
|-<br />
|<math>\sum\,\!</math>|| ||23.9068|| ||-3.007||97.9909||7.1502||-8.0699<br />
|} <br />
<br />
<br />
Utilizing the values from the table, calculate <math> \hat{a} \,\!</math> and <math> \hat{b} \,\!</math> using the following equations: <br />
::<math> \hat{b} =\frac{\sum\limits_{i=1}^{6}(\ln t_{i})y_{i}-(\sum\limits_{i=1}^{6}\ln t_{i})(\sum\limits_{i=1}^{6}y_{i})/6}{ \sum\limits_{i=1}^{6}(\ln t_{i})^{2}-(\sum\limits_{i=1}^{6}\ln t_{i})^{2}/6}<br />
\,\!</math> <br />
<br />
::<math> \hat{b}=\frac{-8.0699-(23.9068)(-3.0070)/6}{97.9909-(23.9068)^{2}/6} \,\!</math><br />
<br />
or:<br />
<br />
::<math> \hat{b}=1.4301 \,\!</math> <br />
<br />
and: <br />
<br />
::<math> \hat{a}=\overline{y}-\hat{b}\overline{T}=\frac{\sum \limits_{i=1}^{N}y_{i}}{N}-\hat{b}\frac{\sum\limits_{i=1}^{N}\ln t_{i}}{N } \,\!</math> <br />
<br />
or: <br />
<br />
::<math> \hat{a}=\frac{(-3.0070)}{6}-(1.4301)\frac{23.9068}{6}=-6.19935 \,\!</math> <br />
<br />
Therefore:<br />
<br />
::<math> \hat{\beta }=\hat{b}=1.4301 \,\!</math> <br />
<br />
and: <br />
<br />
::<math> \hat{\eta }=e^{-\frac{\hat{a}}{\hat{b}}}=e^{-\frac{(-6.19935)}{ 1.4301}} \,\!</math> <br />
<br />
or: <br />
<br />
::<math> \hat{\eta }=76.318\text{ hr} \,\!</math> <br />
<br />
The correlation coefficient can be estimated as: <br />
<br />
::<math> \hat{\rho }=0.9956 \,\!</math> <br />
<br><br />
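The table-based arithmetic above can be reproduced in a few lines. A sketch under the assumption that Benard's approximation stands in for the exact median ranks, so the estimates land within a few thousandths of the worked values; <code>weibull_rry</code> is a hypothetical name:

```python
from math import exp, log

def weibull_rry(times):
    """Rank regression on Y for the 2-parameter Weibull."""
    n = len(times)
    ts = sorted(times)
    x = [log(t) for t in ts]                            # x_i = ln(t_i)
    mr = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]
    y = [log(-log(1 - f)) for f in mr]                  # y_i = ln{-ln[1 - F(t_i)]}
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    b_hat = (sxy - sx * sy / n) / (sxx - sx * sx / n)   # slope -> beta
    a_hat = sy / n - b_hat * sx / n                     # intercept -> -beta ln(eta)
    return b_hat, exp(-a_hat / b_hat)                   # (beta, eta)
```

For the six failure times this returns roughly <math>\beta \approx 1.43\,\!</math> and <math>\eta \approx 76.3\,\!</math> hr.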
This example can be repeated in the Weibull++ software. The following plot shows the Weibull probability plot for the data set (with 90% two-sided confidence bounds). <br />
<br />
[[Image:Weibull Distribution Example 3 RRY Confidence Plot.png|center|450px| ]]<br />
<br />
If desired, the Weibull ''pdf'' representing the data set can be written as: <br />
<br />
::<math> f(t)={\frac{\beta }{\eta }}\left( {\frac{t}{\eta }}\right) ^{\beta -1}e^{-\left( {\frac{t}{\eta }}\right) ^{\beta }} \,\!</math> <br />
<br />
or: <br />
<br />
::<math> f(t)={\frac{1.4301}{76.318}}\left( {\frac{t}{76.318}}\right) ^{0.4301}e^{-\left( {\frac{t}{76.318}}\right) ^{1.4301}} \,\!</math> <br />
<br />
You can also plot this result in Weibull++, as shown next. From this point on, different results, reports and plots can be obtained.<br />
<br />
[[Image:Weibull Distribution Example 3 pdf Plot.png|center|450px]]<br />
<br />
=== Rank Regression on X ===<br />
Performing a rank regression on X is similar to the process for rank regression on Y, with the difference being that the ''horizontal'' deviations from the points to the line are minimized rather than the vertical. Again, the first task is to bring the reliability function into a linear form. This step is exactly the same as in the regression on Y analysis, and all the equations apply in this case too. The derivation from the previous analysis begins at the least squares fit step, where in this case we treat <math>x\,\!</math> as the dependent variable and <math>y\,\!</math> as the independent variable. The best-fitting straight line to the data, for regression on X (see [[Parameter Estimation|Parameter Estimation]]), is the straight line: <br />
<br />
::<math> x= \hat{a}+\hat{b}y \,\!</math> <br />
<br />
The corresponding equations for <math> \hat{a} \,\!</math> and <math> \hat{b} \,\!</math> are: <br />
<br />
::<math> \hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\sum\limits_{i=1}^{N}x_{i}}{N} -\hat{b}\frac{\sum\limits_{i=1}^{N}y_{i}}{N} \,\!</math> <br />
<br />
and:<br />
<br />
::<math> \hat{b}={\frac{\sum\limits_{i=1}^{N}x_{i}y_{i}-\frac{\sum \limits_{i=1}^{N}x_{i}\sum\limits_{i=1}^{N}y_{i}}{N}}{\sum \limits_{i=1}^{N}y_{i}^{2}-\frac{\left( \sum\limits_{i=1}^{N}y_{i}\right) ^{2}}{N}}} \,\!</math> <br />
<br />
where: <br />
<br />
::<math> y_{i}=\ln \left\{ -\ln [1-F(t_{i})]\right\} \,\!</math> <br />
<br />
and: <br />
<br />
::<math>\begin{align}<br />
x_{i}=\ln (t_{i})<br />
\end{align}\,\!</math><br />
<br />
<br />
and the <math>F({{t}_{i}})\,\!</math> values are again obtained from the median ranks. <br />
<br />
Once <math> \hat{a} \,\!</math> and <math> \hat{b} \,\!</math> are obtained, solve the linear equation for <math>y\,\!</math>, which corresponds to: <br />
<br />
::<math> y=-\frac{\hat{a}}{\hat{b}}+\frac{1}{\hat{b}}x \,\!</math><br />
<br />
Solving for the parameters from the above equations, we get: <br />
<br />
::<math> a=-\frac{\hat{a}}{\hat{b}}=-\beta \ln (\eta )\,\!</math><br />
<br />
and <br />
<br />
::<math> b=\frac{1}{\hat{b}}=\beta\,\!</math> <br />
<br />
The correlation coefficient is evaluated as before. <br />
====RRX Example====<br />
Again using the same data set from the [[The_Weibull_Distribution#Probability_Plotting|probability plotting]] and [[The_Weibull_Distribution#RRY_Example|RRY]] examples (with six failures at 16, 34, 53, 75, 93 and 120 hours), calculate the parameters using rank regression on X.<br />
<br />
'''Solution'''<br />
<br />
The same table constructed above for the [[The_Weibull_Distribution#RRY_Example|RRY example]] can also be applied for RRX. <br />
<br />
Using the values from this table we get: <br />
<br />
::<math> \hat{b} ={\frac{\sum\limits_{i=1}^{6}(\ln T_{i})y_{i}-\frac{ \sum\limits_{i=1}^{6}\ln T_{i}\sum\limits_{i=1}^{6}y_{i}}{6}}{ \sum\limits_{i=1}^{6}y_{i}^{2}-\frac{\left( \sum\limits_{i=1}^{6}y_{i}\right) ^{2}}{6}}}<br />
\,\!</math><br />
<br />
::<math>\hat{b} =\frac{-8.0699-(23.9068)(-3.0070)/6}{7.1502-(-3.0070)^{2}/6} \,\!</math><br />
<br />
or: <br />
<br />
::<math> \hat{b}=0.6931 \,\!</math> <br />
<br />
and: <br />
<br />
::<math> \hat{a}=\overline{x}-\hat{b}\overline{y}=\frac{\sum\limits_{i=1}^{6}\ln T_{i} }{6}-\hat{b}\frac{\sum\limits_{i=1}^{6}y_{i}}{6} \,\!</math> <br />
<br />
or: <br />
<br />
::<math> \hat{a}=\frac{23.9068}{6}-(0.6931)\frac{(-3.0070)}{6}=4.3318 \,\!</math> <br />
<br />
Therefore: <br />
<br />
::<math> \hat{\beta }=\frac{1}{\hat{b}}=\frac{1}{0.6931}=1.4428 \,\!</math> <br />
<br />
and: <br />
<br />
::<math> \hat{\eta }=e^{\frac{\hat{a}}{\hat{b}}\cdot \frac{1}{\hat{ \beta }}}=e^{\frac{4.3318}{0.6931}\cdot \frac{1}{1.4428}}=76.0811\text{ hr} \,\!</math> <br />
<br />
The correlation coefficient is: <br />
<br />
::<math> \hat{\rho }=0.9956 \,\!</math> <br />
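The RRX arithmetic differs from RRY only in which variable is treated as dependent. A companion sketch, with the same Benard-approximation caveat and a hypothetical name:

```python
from math import exp, log

def weibull_rrx(times):
    """Rank regression on X for the 2-parameter Weibull: regress
    x = ln(t) on y, i.e., minimize horizontal deviations."""
    n = len(times)
    ts = sorted(times)
    x = [log(t) for t in ts]
    mr = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]
    y = [log(-log(1 - f)) for f in mr]
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    syy = sum(b * b for b in y)
    b_hat = (sxy - sx * sy / n) / (syy - sy * sy / n)
    a_hat = sx / n - b_hat * sy / n
    beta = 1.0 / b_hat
    eta = exp((a_hat / b_hat) * (1.0 / beta))  # algebraically exp(a_hat)
    return beta, eta
```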
<br />
The results and the associated graph using Weibull++ are shown next. Note that the slight variation in the results is due to the number of significant figures used in the estimation of the median ranks. Weibull++ by default uses double precision accuracy when computing the median ranks. <br />
<br />
[[Image:Weibull Distribution Example 4 RRX Plot.png|center|450px| ]]<br />
<br />
<br><br />
<br />
=== 3-Parameter Weibull Regression ===<br />
When the MR versus <math>{{t}_{j}}\,\!</math> points plotted on the Weibull probability paper do not fall on a satisfactory straight line and the points fall on a curve, then a location parameter, <math>\gamma\,\!</math>, might exist which may straighten out these points. The goal in this case is to fit a curve, instead of a line, through the data points using nonlinear regression. The Gauss-Newton method can be used to solve for the parameters, <math>\beta\,\!</math>, <math>\eta\,\!</math> and <math>\gamma\,\!</math>, by performing a Taylor series expansion on <math>F(t{_{i}};\beta ,\eta, \gamma )\,\!</math>. Then the nonlinear model is approximated with linear terms and ordinary least squares are employed to estimate the parameters. This procedure is iterated until a satisfactory solution is reached. <br />
<br />
<br />
(Note that other shapes, particularly ''S'' shapes, might suggest the existence of more than one population. In these cases, the multiple population [[The Mixed Weibull Distribution|mixed Weibull distribution]], may be more appropriate.)<br />
<br />
<br />
When you use the 3-parameter Weibull distribution, Weibull++ calculates the value of <math>\gamma\,\!</math> by utilizing an optimized Nelder-Mead algorithm and adjusts the points by this value of <math>\gamma\,\!</math> such that they fall on a straight line, and then plots both the adjusted and the original unadjusted points. To draw a curve through the original unadjusted points, if so desired, select Weibull 3P Line Unadjusted for Gamma from the ''Show Plot Line'' submenu under the ''Plot Options'' menu. The returned estimations of the parameters are the same when selecting RRX or RRY. To display the unadjusted data points and line along with the adjusted data points and line, select ''Show/Hide Items'' under the ''Plot Options ''menu and include the unadjusted data points and line as follows: <br />
<br />
[[Image:showhideplotitems.png|center]]<br />
<br />
[[Image:Weibull Distribution Example 4 Show Hide Items.png|center|450px]]<br />
<br />
The results and the associated graph for the previous example using the 3-parameter Weibull case are shown next: <br />
<br />
[[Image:Weibull Distribution Example 4 Plot.png|center|450px| ]]<br />
<br />
=== Maximum Likelihood Estimation ===<br />
As outlined in [[Parameter Estimation]], maximum likelihood estimation works by developing a likelihood function based on the available data and finding the values of the parameter estimates that maximize the likelihood function. This can be achieved by using iterative methods to determine the parameter estimate values that maximize the likelihood function, but this can be rather difficult and time-consuming, particularly when dealing with the three-parameter distribution. Another method of finding the parameter estimates involves taking the partial derivatives of the likelihood function with respect to the parameters, setting the resulting equations equal to zero and solving simultaneously to determine the values of the parameter estimates. (Note that MLE asymptotic properties do not hold when estimating <math>\gamma\,\!</math> using MLE, as discussed in Meeker and Escobar [[Appendix:_Life_Data_Analysis_References|[27]]].) The log-likelihood functions and associated partial derivatives used to determine maximum likelihood estimates for the Weibull distribution are covered in [[Appendix:_Log-Likelihood_Equations|Appendix D]]. <br />
====MLE Example====<br />
One last time, use the same data set from the [[The_Weibull_Distribution#Probability_Plotting|probability plotting]], [[The_Weibull_Distribution#RRY_Example|RRY]] and [[The_Weibull_Distribution#RRY_Example|RRX]] examples (with six failures at 16, 34, 53, 75, 93 and 120 hours) and calculate the parameters using MLE.<br />
<br><br />
<br />
'''Solution'''<br />
<br />
In this case, we have non-grouped data with no suspensions or intervals, (i.e., complete data). The equations for the partial derivatives of the log-likelihood function are derived in [[Appendix:_Log-Likelihood_Equations|an appendix]] and given next: <br />
::<math> \frac{\partial \Lambda }{\partial \beta }=\frac{6}{\beta } +\sum_{i=1}^{6}\ln \left( \frac{T_{i}}{\eta }\right) -\sum_{i=1}^{6}\left( \frac{T_{i}}{\eta }\right) ^{\beta }\ln \left( \frac{T_{i}}{\eta }\right) =0<br />
\,\!</math><br />
<br />
And: <br />
<br />
::<math> \frac{\partial \Lambda }{\partial \eta }=\frac{-\beta }{\eta }\cdot 6+\frac{ \beta }{\eta }\sum\limits_{i=1}^{6}\left( \frac{T_{i}}{\eta }\right) ^{\beta }=0 \,\!</math> <br />
<br />
Solving the above equations simultaneously we get: <br />
<br />
::<math> \hat{\beta }=1.933,\,\!</math> <math>\hat{\eta }=73.526 \,\!</math><br />
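These two equations can be solved numerically in a few lines: substitute the closed-form <math>\eta\,\!</math> from the second equation into the first, then bisect on <math>\beta\,\!</math>. A sketch for complete data only, with a hypothetical helper name:

```python
from math import log

def weibull_mle(times):
    """Complete-data MLE for the 2-parameter Weibull. From the eta equation,
    eta^beta = sum(t^beta)/n; substituting into the beta equation leaves
    g(beta) = sum(t^b ln t)/sum(t^b) - 1/b - mean(ln t) = 0, which is
    increasing in beta and is solved here by bisection."""
    n = len(times)
    lnt = [log(t) for t in times]
    mean_lnt = sum(lnt) / n

    def g(b):
        tb = [t ** b for t in times]
        return sum(x * l for x, l in zip(tb, lnt)) / sum(tb) - 1.0 / b - mean_lnt

    lo, hi = 0.05, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2
    eta = (sum(t ** beta for t in times) / n) ** (1.0 / beta)
    return beta, eta
```

For the six failure times this reproduces <math>\hat{\beta }\approx 1.933\,\!</math> and <math>\hat{\eta }\approx 73.5\,\!</math>.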
<br />
The variance/covariance matrix is found to be:<br />
<br />
::<math> \left[ \begin{array}{cc} \hat{Var}\left( \hat{\beta }\right) =0.4211 & \hat{Cov}( \hat{\beta },\hat{\eta })=3.272 \\
\hat{Cov}(\hat{\beta },\hat{\eta })=3.272 & \hat{Var} \left( \hat{\eta }\right) =266.646 \end{array} \right] \,\!</math><br />
<br />
The results and the associated plot using Weibull++ (MLE) are shown next. <br />
<br />
[[Image:Weibull Distribution Example 5 Plot.png|center|450px| ]]<br />
<br />
You can view the variance/covariance matrix directly by clicking the '''Analysis Summary''' table in the control panel. Note that the decimal accuracy displayed and used is based on your individual Application Setup. <br />
<br />
[[Image: Weibull Distribution Example 5 Variance Matrix.png|center|450px| ]]<br />
<br />
====Unbiased MLE <math>\beta \,\!</math> ====<br />
It is well known that the MLE estimate of <math>\beta \,\!</math> is biased. This bias affects the accuracy of reliability predictions, especially when the number of failures is small. Weibull++ provides a simple way to correct the bias of the MLE <math>\beta \,\!</math>.<br />
<br />
<br />
When there are no right censored observations in the data, the following equation provided by Hirose [[Appendix:_Life_Data_Analysis_References|[39]]] is used to calculate the unbiased <math>\beta \,\!</math>.<br />
<br />
<math>{{\beta }_{U}}=\frac{\beta }{1.0115+\frac{1.278}{r}+\frac{2.001}{{{r}^{2}}}+\frac{20.35}{{{r}^{3}}}-\frac{46.98}{{{r}^{4}}}}</math><br />
<br />
where <math>r\,\!</math> is the number of failures. <br />
<br />
When there are right censored observations in the data, the following equation provided by Ross [[Appendix:_Life_Data_Analysis_References|[40]]] is used to calculate the unbiased <math>\beta\,\!</math>.<br />
<br />
<math>{{\beta }_{U}}=\frac{\beta }{1+\frac{1.37}{r-1.92}\sqrt{\frac{n}{r}}}</math><br />
<br />
where <math>n\,\!</math> is the number of observations. <br />
<br />
<br />
The software will use the above equations only when there are more than two failures in the data set.<br />
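The two corrections are straightforward to apply once <math>\beta \,\!</math>, <math>r\,\!</math> and <math>n\,\!</math> are known. A sketch; the function name and the <code>n=None</code> convention for complete data are illustrative choices, not Weibull++'s interface:

```python
def unbiased_beta(beta, r, n=None):
    """Bias-corrected Weibull shape parameter.
    Complete data (n is None or n == r): Hirose's correction.
    Right-censored data: Ross's correction with n observations, r failures.
    Both corrections apply only when there are more than two failures."""
    if r <= 2:
        raise ValueError("corrections require more than two failures")
    if n is None or n == r:
        return beta / (1.0115 + 1.278 / r + 2.001 / r**2
                       + 20.35 / r**3 - 46.98 / r**4)
    return beta / (1.0 + 1.37 / (r - 1.92) * (n / r) ** 0.5)
```

Both denominators exceed 1 for typical failure counts, so the corrected <math>\beta \,\!</math> is smaller than the raw MLE value.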
<div class="noprint"><br />
{{Examples Box|http://www.weibull.com/hotwire/issue109/relbasics109.htm|<p>For an example on how you might correct biased estimates, see also:</p> <br />
{{Examples Link External|http://www.weibull.com/hotwire/issue109/relbasics109.htm|Unbiasing Parameters in Weibull++}}<nowiki/><br />
}}<br />
</div><br />
<br />
== Fisher Matrix Confidence Bounds ==<br />
One of the methods used by the application in estimating the different types of confidence bounds for Weibull data, the Fisher matrix method, is presented in this section. The complete derivations were presented in detail (for a general function) in [[Confidence Bounds]]. <br />
<br />
=== Bounds on the Parameters ===<br />
One of the properties of maximum likelihood estimators is that they are asymptotically normal, meaning that for large samples they are normally distributed. Additionally, since both the shape parameter estimate, <math> \hat{\beta } \,\!</math>, and the scale parameter estimate, <math> \hat{\eta } \,\!</math>, must be positive, <math>\ln \hat{\beta } \,\!</math> and <math>\ln \hat{\eta } \,\!</math> are treated as being normally distributed as well. The lower and upper bounds on the parameters are estimated from Nelson [[Appendix:_Life_Data_Analysis_References|[30]]]: <br />
<br />
::<math> \beta _{U} =\hat{\beta }\cdot e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \beta })}}{\hat{\beta }}}\text{ (upper bound)} \,\!</math><br />
<br />
::<math> \beta _{L} =\frac{\hat{\beta }}{e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \beta })}}{\hat{\beta }}}} \text{ (lower bound)} <br />
\,\!</math><br />
<br />
and: <br />
<br />
::<math> \eta _{U} =\hat{\eta }\cdot e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \eta })}}{\hat{\eta }}}\text{ (upper bound)} <br />
\,\!</math><br />
<br />
::<math> \eta _{L} =\frac{\hat{\eta }}{e^{\frac{K_{\alpha }\sqrt{Var(\hat{ \eta })}}{\hat{\eta }}}}\text{ (lower bound)} \,\!</math><br />
<br />
where <math> K_{\alpha}\,\!</math> is defined by: <br />
<br />
::<math> \alpha =\frac{1}{\sqrt{2\pi }}\int_{K_{\alpha }}^{\infty }e^{-\frac{t^{2}}{2} }dt=1-\Phi (K_{\alpha }) \,\!</math> <br />
<br />
If <math>\delta\,\!</math> is the confidence level, then <math> \alpha =\frac{1-\delta }{2} \,\!</math> for the two-sided bounds and <math>\alpha = 1 - \delta\,\!</math> for the one-sided bounds. The variances and covariances of <math> \hat{\beta }\,\!</math> and <math> \hat{\eta }\,\!</math> are estimated from the inverse local Fisher matrix, as follows: <br />
<br />
::<math> \left( \begin{array}{cc} \hat{Var}\left( \hat{\beta }\right) & \hat{Cov}\left( \hat{ \beta },\hat{\eta }\right) <br />
\\<br />
\hat{Cov}\left( \hat{\beta },\hat{\eta }\right) & \hat{Var} \left( \hat{\eta }\right) \end{array} \right) =\left( \begin{array}{cc} -\frac{\partial ^{2}\Lambda }{\partial \beta ^{2}} & -\frac{\partial ^{2}\Lambda }{\partial \beta \partial \eta } <br />
\\<br />
<br />
-\frac{\partial ^{2}\Lambda }{\partial \beta \partial \eta } & -\frac{ \partial ^{2}\Lambda }{\partial \eta ^{2}} \end{array} \right) _{\beta =\hat{\beta },\text{ }\eta =\hat{\eta }}^{-1} \,\!</math><br />
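For illustration, the log-transformed bounds above can be computed once the estimate and its variance (from the inverse Fisher matrix) are available. This is a minimal sketch; the function name and input values are hypothetical:

```python
import math
from statistics import NormalDist

def param_bounds(est, var, cl=0.90, two_sided=True):
    """Fisher-matrix bounds on a positive parameter (beta or eta),
    using the log-normal form of the equations above."""
    alpha = (1 - cl) / 2 if two_sided else 1 - cl
    k = NormalDist().inv_cdf(1 - alpha)       # K_alpha, standard normal quantile
    w = math.exp(k * math.sqrt(var) / est)    # multiplicative half-width
    return est / w, est * w                   # (lower bound, upper bound)
```

Note that the bounds are symmetric on the log scale, so the product of the lower and upper bounds equals the square of the point estimate.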
<br />
'''Fisher Matrix Confidence Bounds and Regression Analysis''' <br />
<br />
Note that the variance and covariance of the parameters are obtained from the inverse Fisher information matrix as described in this section. The local Fisher information matrix is obtained from the second partials of the likelihood function, by substituting the solved parameter estimates into the particular functions. This method is based on maximum likelihood theory and is derived from the fact that the parameter estimates were computed using maximum likelihood estimation methods. When one uses least squares or regression analysis for the parameter estimates, this methodology is then not theoretically applicable. However, if one assumes that the variance and covariance of the parameters (and the other properties of the estimators) will be similar regardless of the underlying solution method, then the above methodology can also be used in regression analysis. <br />
<br />
<br />
The Fisher matrix is one of the methodologies that Weibull++ uses for both MLE and regression analysis. Specifically, Weibull++ uses the likelihood function and computes the local Fisher information matrix based on the estimates of the parameters and the current data. This gives consistent confidence bounds regardless of the underlying method of solution (i.e., MLE or regression). In addition, Weibull++ checks this assumption and proceeds only if it is considered acceptable. In some instances, Weibull++ will prompt you with an "Unable to Compute Confidence Bounds" message when using regression analysis. This is an indication that these assumptions were violated.<br />
<br />
=== Bounds on Reliability ===<br />
The bounds on reliability can easily be derived by first looking at the general extreme value distribution (EVD). Its reliability function is given by: <br />
<br />
::<math> R(t)=e^{-e^{\left( \frac{t-p_{1}}{p_{2}}\right) }} \,\!</math> <br />
<br />
By substituting <math>t = \ln t\,\!</math> and setting <math> p_{1}=\ln({\eta})\,\!</math> and <math> p_{2}=\frac{1}{ \beta } \,\!</math>, the above equation becomes the Weibull reliability function: <br />
<br />
::<math> R(t)=e^{-e^{\beta \left( \ln t-\ln \eta \right) }}=e^{-e^{\ln \left( \frac{t }{\eta }\right) ^{\beta }}}=e^{-\left( \frac{t}{\eta }\right) ^{\beta }} \,\!</math><br />
<br />
with: <br />
<br />
::<math> R(t)=e^{-e^{\beta \left( \ln t-\ln \eta \right) }}\,\!</math> <br />
<br />
set:<br />
<br />
:: <math> u=\beta \left( \ln t-\ln \eta \right) \,\!</math> <br />
<br />
The reliability function now becomes: <br />
<br />
::<math> R(t)=e^{-e^{u}} \,\!</math> <br />
<br />
The next step is to find the upper and lower bounds on <math>u\,\!</math>. Using the equations derived in [[Confidence Bounds]], the bounds on <math>u\,\!</math> are then estimated from Nelson [[Appendix:_Life_Data_Analysis_References|[30]]]: <br />
<br />
::<math> u_{U} =\hat{u}+K_{\alpha }\sqrt{Var(\hat{u})} <br />
\,\!</math><br />
<br />
::<math> u_{L} =\hat{u}-K_{\alpha }\sqrt{Var(\hat{u})} <br />
\,\!</math><br />
<br />
where: <br />
<br />
::<math> Var(\hat{u}) =\left( \frac{\partial u}{\partial \beta }\right) ^{2}Var( \hat{\beta })+\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta }) +2\left( \frac{\partial u}{\partial \beta }\right) \left( \frac{\partial u }{\partial \eta }\right) Cov\left( \hat{\beta },\hat{\eta }\right) \,\!</math><br />
<br />
or: <br />
<br />
::<math> Var(\hat{u}) =\frac{\hat{u}^{2}}{\hat{\beta }^{2}}Var(\hat{ \beta })+\frac{\hat{\beta }^{2}}{\hat{\eta }^{2}}Var(\hat{\eta }) -\left( \frac{2\hat{u}}{\hat{\eta }}\right) Cov\left( \hat{\beta }, \hat{\eta }\right). \,\!</math><br />
<br />
The upper and lower bounds on reliability are: <br />
<br />
::<math> R_{U} =e^{-e^{u_{L}}}\text{ (upper bound)}\,\!</math><br />
<br />
::<math> R_{L} =e^{-e^{u_{U}}}\text{ (lower bound)}\,\!</math><br />
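The chain from <math>u\,\!</math> to the reliability bounds can be sketched numerically. All inputs below (time, parameter estimates, variances, covariance) are hypothetical values chosen for illustration:

```python
import math
from statistics import NormalDist

def reliability_bounds(t, beta, eta, var_b, var_e, cov_be, cl=0.90):
    """Two-sided Fisher-matrix bounds on R(t) via the u-transform above."""
    u = beta * (math.log(t) - math.log(eta))
    # Var(u-hat) from the error-propagation formula in the text
    var_u = (u/beta)**2 * var_b + (beta/eta)**2 * var_e - (2*u/eta) * cov_be
    k = NormalDist().inv_cdf(1 - (1 - cl) / 2)
    u_lo = u - k * math.sqrt(var_u)
    u_hi = u + k * math.sqrt(var_u)
    # note the inversion: the upper bound on u gives the lower bound on R
    return math.exp(-math.exp(u_hi)), math.exp(-math.exp(u_lo))  # (R_L, R_U)
```

The bounds bracket the point estimate <math>R(t)=e^{-(t/\eta)^{\beta}}\,\!</math>, and the inversion between <math>u\,\!</math> and <math>R\,\!</math> follows from <math>R\,\!</math> being a decreasing function of <math>u\,\!</math>.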
<br />
'''Other Weibull Forms'''<br />
<br />
Weibull++ makes the following assumptions/substitutions when using the three-parameter or one-parameter forms: <br />
<br />
*For the 3-parameter case, substitute <math> t=\ln (t-\hat{\gamma }) \,\!</math> (and by definition <math>\gamma < t\,\!</math>), instead of <math>\ln t\,\!</math>. (Note that this is an approximation since it eliminates the third parameter and assumes that <math> Var( \hat{\gamma })=0. \,\!</math>) <br />
*For the 1-parameter case, <math> Var(\hat{\beta })=0, \,\!</math> thus:<br />
<br />
::<math> Var(\hat{u})=\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta })=\left( \frac{\hat{\beta }}{\hat{\eta }}\right) ^{2}Var(\hat{\eta }) \,\!</math> <br />
<br />
Also note that the time axis (x-axis) in the three-parameter Weibull plot in Weibull++ is not <math>{t}\,\!</math> but <math>t - \gamma\,\!</math>. This means that one must be cautious when obtaining confidence bounds from the plot. If one desires to estimate the confidence bounds on reliability for a given time <math>{{t}_{0}}\,\!</math> from the adjusted plotted line, then these bounds should be obtained for a <math>{{t}_{0}} - \gamma\,\!</math> entry on the time axis.<br />
<br />
=== Bounds on Time ===<br />
The bounds around the time estimate or reliable life estimate, for a given Weibull percentile (unreliability), are estimated by first solving the reliability equation with respect to time, as discussed in Lloyd and Lipow [[Appendix:_Life_Data_Analysis_References|[24]]] and in Nelson [[Appendix:_Life_Data_Analysis_References|[30]]]: <br />
<br />
::<math> \ln R =-\left( \frac{t}{\eta }\right) ^{\beta } <br />
\,\!</math> <br />
<br />
::<math> \ln (-\ln R) =\beta \ln \left( \frac{t}{\eta }\right) \,\!</math><br />
<br />
::<math>\begin{align}<br />
\ln (-\ln R) =\beta (\ln t-\ln \eta ) <br />
\end{align}\,\!</math><br />
<br />
or: <br />
<br />
::<math> u=\frac{1}{\beta }\ln (-\ln R)+\ln \eta \,\!</math> <br />
<br />
where <math>u = \ln t\,\!</math> .<br />
<br />
The upper and lower bounds on <math>u\,\!</math> are estimated from:<br />
<br />
::<math> u_{U} =\hat{u}+K_{\alpha }\sqrt{Var(\hat{u})} \,\!</math><br />
<br />
::<math> u_{L} =\hat{u}-K_{\alpha }\sqrt{Var(\hat{u})} \,\!</math><br />
<br />
where: <br />
<br />
::<math> Var(\hat{u})=\left( \frac{\partial u}{\partial \beta }\right) ^{2}Var( \hat{\beta })+\left( \frac{\partial u}{\partial \eta }\right) ^{2}Var( \hat{\eta })+2\left( \frac{\partial u}{\partial \beta }\right) \left( \frac{\partial u}{\partial \eta }\right) Cov\left( \hat{\beta },\hat{ \eta }\right) \,\!</math> <br />
<br />
or: <br />
::<math> Var(\hat{u}) =\frac{1}{\hat{\beta }^{4}}\left[ \ln (-\ln R)\right] ^{2}Var(\hat{\beta })+\frac{1}{\hat{\eta }^{2}}Var(\hat{\eta })+2\left( -\frac{\ln (-\ln R)}{\hat{\beta }^{2}}\right) \left( \frac{1}{ \hat{\eta }}\right) Cov\left( \hat{\beta },\hat{\eta }\right) \,\!</math><br />
<br />
The upper and lower bounds are then found by: <br />
<br />
::<math> T_{U} =e^{u_{U}}\text{ (upper bound)} \,\!</math><br />
<br />
::<math> T_{L} =e^{u_{L}}\text{ (lower bound)} \,\!</math><br />
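The time-bound equations above translate directly into code. As before, this is an illustrative sketch with hypothetical inputs, not Weibull++'s implementation:

```python
import math
from statistics import NormalDist

def time_bounds(R, beta, eta, var_b, var_e, cov_be, cl=0.90):
    """Two-sided Fisher-matrix bounds on the time at which reliability is R."""
    q = math.log(-math.log(R))               # ln(-ln R)
    u = q / beta + math.log(eta)             # u = ln t
    # Var(u-hat) from the error-propagation formula in the text
    var_u = (q**2 / beta**4) * var_b + var_e / eta**2 \
            + 2 * (-q / beta**2) * (1 / eta) * cov_be
    d = NormalDist().inv_cdf(1 - (1 - cl) / 2) * math.sqrt(var_u)
    return math.exp(u - d), math.exp(u + d)  # (T_L, T_U)
```

The point estimate <math>\hat{t}=\eta(-\ln R)^{1/\beta}\,\!</math> always falls between the two bounds.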
<br />
== Likelihood Ratio Confidence Bounds ==<br />
As covered in [[Confidence Bounds]], the likelihood confidence bounds are calculated by finding values for <math>{{\theta}_{1}}\,\!</math> and <math>{{\theta}_{2}}\,\!</math> that satisfy: <br />
<br />
::<math> -2\cdot \text{ln}\left( \frac{L(\theta _{1},\theta _{2})}{L(\hat{\theta }_{1}, \hat{\theta }_{2})}\right) =\chi _{\alpha ;1}^{2} \,\!</math> <br />
<br />
This equation can be rewritten as: <br />
<br />
::<math> L(\theta _{1},\theta _{2})=L(\hat{\theta }_{1},\hat{\theta } _{2})\cdot e^{\frac{-\chi _{\alpha ;1}^{2}}{2}} \,\!</math> <br />
<br />
For complete data, the likelihood function for the Weibull distribution is given by:<br />
<br />
::<math> L(\beta ,\eta )=\prod_{i=1}^{N}f(x_{i};\beta ,\eta )=\prod_{i=1}^{N}\frac{ \beta }{\eta }\cdot \left( \frac{x_{i}}{\eta }\right) ^{\beta -1}\cdot e^{-\left( \frac{x_{i}}{\eta }\right) ^{\beta }} \,\!</math> <br />
<br />
For a given value of <math>\alpha\,\!</math>, values for <math>\beta\,\!</math> and <math>\eta\,\!</math> can be found which represent the maximum and minimum values that satisfy the above equation. These represent the confidence bounds for the parameters at a confidence level <math>\delta\,\!</math>, where <math>\alpha = \delta\,\!</math> for two-sided bounds and <math>\alpha = 2\delta - 1\,\!</math> for one-sided. <br />
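The search for the extreme parameter values on the likelihood-ratio contour can be illustrated with a crude grid scan. This is our own simplified sketch (Weibull++ solves the contour numerically, not by gridding), and the data set is hypothetical:

```python
import math
from statistics import NormalDist

def loglik(b, e, data):
    """Weibull log-likelihood for complete data."""
    return sum(math.log(b/e) + (b - 1)*math.log(x/e) - (x/e)**b for x in data)

def weibull_mle(data):
    """Solve the profile equation for beta by bisection, then recover eta."""
    n = len(data)
    mean_ln = sum(math.log(x) for x in data) / n
    def g(b):
        s0 = sum(x**b for x in data)
        s1 = sum(x**b * math.log(x) for x in data)
        return s1/s0 - 1/b - mean_ln          # zero at the MLE of beta
    lo, hi = 0.01, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    b = (lo + hi) / 2
    return b, (sum(x**b for x in data) / n) ** (1/b)

def lr_beta_bounds(data, cl=0.90):
    """Keep beta values for which some eta stays on or inside the contour."""
    b_hat, e_hat = weibull_mle(data)
    z = NormalDist().inv_cdf((1 + cl) / 2)
    cutoff = loglik(b_hat, e_hat, data) - z*z/2   # chi2(cl; 1 dof) = z^2
    keep = [b for b in (b_hat*(0.2 + 0.005*i) for i in range(400))
            if max(loglik(b, e_hat*(0.5 + 0.015*j), data)
                   for j in range(100)) >= cutoff]
    return min(keep), max(keep)
```

The cutoff uses the identity <math>\chi _{\alpha ;1}^{2}=z_{(1+\alpha)/2}^{2}\,\!</math>, which avoids needing a chi-squared quantile routine.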
<br />
Similarly, the bounds on time and reliability can be found by substituting the Weibull reliability equation into the likelihood function so that it is in terms of <math>\beta\,\!</math> and time or reliability, as discussed in [[Confidence Bounds]]. The likelihood ratio equation used to solve for bounds on time (Type 1) is: <br />
<br />
<br />
::<math> L(\beta ,t)=\prod_{i=1}^{N}\frac{\beta }{\left( \frac{t}{(-\text{ln}(R))^{ \frac{1}{\beta }}}\right) }\cdot \left( \frac{x_{i}}{\left( \frac{t}{(-\text{ ln}(R))^{\frac{1}{\beta }}}\right) }\right) ^{\beta -1}\cdot \text{exp}\left[ -\left( \frac{x_{i}}{\left( \frac{t}{(-\text{ln}(R))^{\frac{1}{\beta }}} \right) }\right) ^{\beta }\right] \,\!</math> <br />
<br />
The likelihood ratio equation used to solve for bounds on reliability (Type 2) is: <br />
<br />
::<math> L(\beta ,R)=\prod_{i=1}^{N}\frac{\beta }{\left( \frac{t}{(-\text{ln}(R))^{ \frac{1}{\beta }}}\right) }\cdot \left( \frac{x_{i}}{\left( \frac{t}{(-\text{ ln}(R))^{\frac{1}{\beta }}}\right) }\right) ^{\beta -1}\cdot \text{exp}\left[ -\left( \frac{x_{i}}{\left( \frac{t}{(-\text{ln}(R))^{\frac{1}{\beta }}} \right) }\right) ^{\beta }\right] \,\!</math><br />
<br />
== Bayesian Confidence Bounds ==<br />
=== Bounds on Parameters ===<br />
Bayesian Bounds use non-informative prior distributions for both parameters. From [[Confidence Bounds]], we know that if the prior distribution of <math>\eta\,\!</math> and <math>\beta\,\!</math> are independent, the posterior joint distribution of <math>\eta\,\!</math> and <math>\beta\,\!</math> can be written as: <br />
<br />
::<math> f(\eta ,\beta |Data)= \dfrac{L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )}{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\eta d\beta } \,\!</math> <br />
<br />
The marginal distribution of <math>\eta\,\!</math> is: <br />
<br />
::<math> f(\eta |Data) =\int_{0}^{\infty }f(\eta ,\beta |Data)d\beta =<br />
\dfrac{\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\varphi (\eta )\varphi (\beta )d\eta d\beta } <br />
\,\!</math> <br />
<br />
where <math> \varphi (\beta )=\frac{1}{\beta } \,\!</math> is the non-informative prior of <math>\beta\,\!</math> and <math> \varphi (\eta )=\frac{1}{\eta } \,\!</math> is the non-informative prior of <math>\eta\,\!</math>. Using these non-informative prior distributions, <math>f(\eta|Data)\,\!</math> can be rewritten as: <br />
<br />
::<math> f(\eta |Data)=\dfrac{\int_{0}^{\infty }L(Data|\eta ,\beta )\frac{1}{\beta } \frac{1}{\eta }d\beta }{\int_{0}^{\infty }\int_{0}^{\infty }L(Data|\eta ,\beta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } \,\!</math> <br />
<br />
The one-sided upper bound on <math>\eta\,\!</math> is: <br />
<br />
::<math> CL=P(\eta \leq \eta _{U})=\int_{0}^{\eta _{U}}f(\eta |Data)d\eta \,\!</math><br />
<br />
The one-sided lower bound on <math>\eta\,\!</math> is: <br />
<br />
::<math> 1-CL=P(\eta \leq \eta _{L})=\int_{0}^{\eta _{L}}f(\eta |Data)d\eta \,\!</math> <br />
<br />
The two-sided bounds on <math>\eta\,\!</math> are: <br />
<br />
::<math> CL=P(\eta _{L}\leq \eta \leq \eta _{U})=\int_{\eta _{L}}^{\eta _{U}}f(\eta |Data)d\eta \,\!</math> <br />
<br />
The same method is used to obtain the bounds on <math>\beta\,\!</math>.<br />
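The posterior integrals above have no closed form, so they are evaluated numerically. The brute-force sketch below marginalizes over <math>\beta\,\!</math> on a grid and accumulates the posterior mass of <math>\eta\,\!</math> until the confidence level is reached (grid limits and the data set are hypothetical; a production solver would use adaptive quadrature):

```python
import math

def eta_upper_bound(data, cl=0.90):
    """One-sided upper Bayesian bound on eta with the 1/beta and 1/eta
    non-informative priors, complete data, via grid integration."""
    def logL(b, e):
        return sum(math.log(b/e) + (b - 1)*math.log(x/e) - (x/e)**b
                   for x in data)
    betas = [0.2 + 0.05*i for i in range(120)]   # beta grid: 0.2 .. 6.15
    etas = [5.0 + 2.0*j for j in range(200)]     # eta grid: 5 .. 403
    peak = max(logL(b, e) for b in betas for e in etas)
    # unnormalized marginal posterior of eta (rectangle rule over beta);
    # subtracting the peak log-likelihood avoids floating-point underflow
    marg = [sum(math.exp(logL(b, e) - peak) / (b * e) for b in betas)
            for e in etas]
    total = sum(marg)
    cum = 0.0
    for e, w in zip(etas, marg):
        cum += w / total
        if cum >= cl:
            return e
    return etas[-1]
```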
<br />
=== Bounds on Reliability ===<br />
The one-sided upper bound on reliability is estimated from:<br />
<br />
::<math> CL=\Pr (R\leq R_{U})=\Pr (\eta \leq T\exp (-\frac{\ln (-\ln R_{U})}{\beta })) \,\!</math> <br />
<br />
From the posterior distribution of <math>\eta\,\!</math> we have: <br />
<br />
::<math> CL=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T\exp (-\dfrac{\ln (-\ln R_{U})}{\beta })}L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } \,\!</math> <br />
<br />
The above equation is solved numerically for <math>{{R}_{U}}\,\!</math>. The same method can be used to calculate the one-sided lower bound and two-sided bounds on reliability.<br />
<br />
=== Bounds on Time ===<br />
From [[Confidence Bounds]], we know that: <br />
<br />
::<math> CL=\Pr (T\leq T_{U})=\Pr (\eta \leq T_{U}\exp (-\frac{\ln (-\ln R)}{\beta })) \,\!</math> <br />
<br />
From the posterior distribution of <math>\eta\,\!</math>, we have: <br />
<br />
::<math> CL=\dfrac{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{T_{U}\exp (-\dfrac{ \ln (-\ln R)}{\beta })}L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta }{\int\nolimits_{0}^{\infty }\int\nolimits_{0}^{\infty }L(\beta ,\eta )\frac{1}{\beta }\frac{1}{\eta }d\eta d\beta } \,\!</math> <br />
<br />
The above equation is solved numerically for <math>{{T}_{U}}\,\!</math>. The same method can be applied to calculate the one-sided lower bound and two-sided bounds on time.<br />
<br />
== Bayesian-Weibull Analysis ==<br />
{{:Bayesian-Weibull_Analysis}}<br />
<br />
==Weibull Distribution Examples==<br />
{{:Weibull Distribution Examples}}</div>
<hr />
<div><div class="noprint"><br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
|style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(42,145,198); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
{{Font|Chapter {{{1}}}:|16|tahoma|bold|white}} {{Font| {{PAGENAME}}|16|tahoma|bold|white}}<br />
|}<br />
{| width="300" align="right" class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="5"<br />
|-<br />
| width="10" bgcolor="#ffffff" rowspan="2" | <br> <br />
| align="center" valign="middle" style="border: 1px solid rgb(206, 242, 224); color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);" | <br />
[[Image:Weibullbox.png|100px|link=Life_Data_Analysis_Reference_Book]] <br>{{font|[[Life_Data_Analysis_Reference_Book|Index]]|12|bold|blue}} <br />
{| width="100%" align="center" class="FCK__ShowTableBorders" border="0" cellspacing="1" cellpadding="1"<br />
|-<br />
| align="center" valign="middle" | {{Font|Chapter {{{1}}}|16|tahoma|bold|white}}&nbsp;<br />
|-<br />
| align="center" valign="middle" rowspan="2" | {{Font|{{PAGENAME}}|12|tahoma|normal|black}}&nbsp;<br />
|}<br />
<br />
<span style="font-size: 9pt; font-weight: normal; font-name: tahoma;"> <br />
__TOC__ <br />
</span> <br />
[[Image:synthesis-icon.png|link=https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Rw01CJDOxn%2FabhkPlZsy6DwBQ%2BaCXsGR|left]]<p style="text-align: left;">'''Available Software:''' <br>[https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Rw01CJDOxn%2FabhkPlZsy6DwBQ%2BaCXsGR Weibull++]</p><br />
[[Image:Examples_icon.png|link=Weibull++_Examples|left]]<p style="text-align: left;">'''More Resources:''' <br>[[Weibull++ Examples|Weibull++ Examples Collection]]</p><br />
[[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Life_Data_Analysis_Reference.pdf|left|36px]]<p style="text-align: left;">'''Download Reference Book:''' <br>[http://www.synthesisplatform.net/references/Life_Data_Analysis_Reference.pdf Life Data Analysis (*.pdf)]</p><br />
[[Image:Generate_book.png|link=http://reliawiki.org/index.php/ReliaWiki:Books/Life_Data_Analysis_Reference_eBook|left|36px]]<p style="text-align: left;">'''Generate Reference Book:''' <br>[http://reliawiki.org/index.php/ReliaWiki:Books/Life_Data_Analysis_Reference_eBook File] may be more up-to-date<br />
</p><br />
|}<br />
</div> <br />
<noinclude>==Usage==<br />
Please use this template for the main chapters in the Weibull++ book. The template takes one parameter, and that is the chapter number of the article.<br />
<pre>Here's an example: {{Template:LDABOOK|XX}}</pre><br />
{{Template:LDABOOK|XX}}<br />
[[Category:Templates]]<br />
</noinclude></div>
<hr />
<div>{{Template:Doebook|3}}<br />
Regression analysis is a statistical technique that attempts to explore and model the relationship between two or more variables. For example, an analyst may want to know if there is a relationship between road accidents and the age of the driver. Regression analysis forms an important part of the statistical analysis of the data obtained from designed experiments and is discussed briefly in this chapter. Every experiment analyzed in a Weibull++ DOE folio includes regression results for each of the responses. These results, along with the results from the analysis of variance (explained in the [[One Factor Designs]] and [[General Full Factorial Designs]] chapters), provide information that is useful to identify significant factors in an experiment and explore the nature of the relationship between these factors and the response. Regression analysis forms the basis for all [https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Rw01CJDOxn%2FabhkPlZsy6DwBQ%2BaCXsGR Weibull++] DOE folio calculations related to the sum of squares used in the analysis of variance. The reason for this is explained in [[Use_of_Regression_to_Calculate_Sum_of_Squares|Appendix B]]. Additionally, DOE folios also include a regression tool to see if two or more variables are related, and to explore the nature of the relationship between them. <br />
<br />
This chapter discusses simple linear regression analysis while a [[Multiple_Linear_Regression_Analysis|subsequent chapter]] focuses on multiple linear regression analysis.<br />
<br />
==Simple Linear Regression Analysis== <br />
A linear regression model attempts to explain the relationship between two or more variables using a straight line. Consider the data obtained from a chemical process where the yield of the process is thought to be related to the reaction temperature (see the table below).<br />
<br />
<br />
[[Image:doet4.1.png|center|343px|Yield data observations of a chemical process at different values of reaction temperature.|link=]]<br />
<br />
<br />
This data can be entered in the DOE folio as shown in the following figure:<br />
<br />
<br />
[[Image:doe4_1.png|center|530px|Data entry in the DOE folio for the observations.|link=]]<br />
<br />
<br />
And a scatter plot can be obtained as shown in the following figure. In the scatter plot yield, <math>y_i\,\!</math> is plotted for different temperature values, <math>x_i\,\!</math>.<br />
<br />
<br />
[[Image:doe4_2.png|center|650px|Scatter plot for the data.|link=]]<br />
<br />
<br />
It is clear that no line can be found to pass through all points of the plot. Thus no functional relation exists between the two variables <math>x\,\!</math> and <math>Y\,\!</math>. However, the scatter plot does give an indication that a straight line may exist such that all the points on the plot are scattered randomly around this line. A statistical relation is said to exist in this case. The statistical relation between <math>x\,\!</math> and <math>Y\,\!</math> may be expressed as follows:<br />
<br />
<br />
::<math>Y=\beta_0+\beta_1{x}+\epsilon\,\!</math><br />
<br />
<br />
The above equation is the linear regression model that can be used to explain the relation between <math>x\,\!</math> and <math>Y\,\!</math> that is seen on the scatter plot above. In this model, the mean value of <math>Y\,\!</math> (abbreviated as <math>E(Y)\,\!</math>) is assumed to follow the linear relation:<br />
<br />
<br />
::<math>E(Y) = \beta_0+\beta_1{x}\,\!</math><br />
<br />
<br />
The actual values of <math>Y\,\!</math> (which are observed as yield from the chemical process from time to time and are random in nature) are assumed to be the sum of the mean value, <math>E(Y)\,\!</math>, and a random error term, <math>\epsilon\,\!</math>:<br />
<br />
<br />
::<math>\begin{align}Y = & E(Y)+\epsilon \\ <br />
= & \beta_0+\beta_1{x}+\epsilon\end{align}\,\!</math><br />
<br />
<br />
The regression model here is called a ''simple'' linear regression model because there is just one independent variable, <math>x\,\!</math>, in the model. In regression models, the independent variables are also referred to as regressors or predictor variables. The dependent variable, <math>Y\,\!</math> , is also referred to as the response. The slope, <math>\beta_1\,\!</math>, and the intercept, <math>\beta_0\,\!</math> , of the line <math>E(Y)=\beta_0+\beta_1{x}\,\!</math> are called ''regression coefficients''. The slope, <math>\beta_1\,\!</math>, can be interpreted as the change in the mean value of <math>Y\,\!</math> for a unit change in <math>x\,\!</math>.<br />
<br />
The random error term, <math>\epsilon\,\!</math>, is assumed to follow the normal distribution with a mean of 0 and variance of <math>\sigma^2\,\!</math>. Since <math>Y\,\!</math> is the sum of this random term and the mean value, <math>E(Y)\,\!</math>, which is a constant, the variance of <math>Y\,\!</math> at any given value of <math>x\,\!</math> is also <math>\sigma^2\,\!</math>. Therefore, at any given value of <math>x\,\!</math>, say <math>x_i\,\!</math>, the dependent variable <math>Y\,\!</math> follows a normal distribution with a mean of <math>\beta_0+\beta_1{x_i}\,\!</math> and a standard deviation of <math>\sigma\,\!</math>. This is illustrated in the following figure.<br />
<br />
[[Image:doe4.3.png|center|583px|The normal distribution of <math>Y\,\!</math> for two values of <math>x\,\!</math>. Also shown is the true regression line and the values of the random error term, <math>\epsilon\,\!</math>, corresponding to the two <math>x\,\!</math> values. The true regression line and <math>\epsilon\,\!</math> are usually not known.|link=]]<br />
<br />
===Fitted Regression Line===<br />
The true regression line is usually not known. However, the regression line can be estimated by estimating the coefficients <math>\beta_1\,\!</math> and <math>\beta_0\,\!</math> for an observed data set. The estimates, <math>\hat{\beta}_1\,\!</math> and <math>\hat{\beta}_0\,\!</math>, are calculated using least squares. (For details on least square estimates, refer to [[Appendix:_Life_Data_Analysis_References|Hahn & Shapiro (1967)]].) The estimated regression line, obtained using the values of <math>\hat{\beta}_1\,\!</math> and <math>\hat{\beta}_0\,\!</math>, is called the ''fitted line''. The least square estimates, <math>\hat{\beta}_1\,\!</math> and <math>\hat{\beta}_0\,\!</math>, are obtained using the following equations:<br />
<br />
<br />
::<math>\hat{\beta}_1 = \frac{\sum_{i=1}^n y_i x_i- \frac{(\sum_{i=1}^n y_i) (\sum_{i=1}^n x_i)}{n}}{\sum_{i=1}^n (x_i-\bar{x})^2}\,\!</math><br />
::<math>\hat{\beta}_0=\bar{y}-\hat{\beta}_1 \bar{x}\,\!</math><br />
<br />
<br />
where <math>\bar{y}\,\!</math> is the mean of all the observed values and <math>\bar{x}\,\!</math> is the mean of all values of the predictor variable at which the observations were taken. <math>\bar{y}\,\!</math> is calculated using <math>\bar{y}=(1/n)\sum_{i=1}^n y_i\,\!</math> and <math>\bar{x}\,\!</math> is calculated using <math>\bar{x}=(1/n)\sum_{i=1}^n x_i\,\!</math>.<br />
<br />
<br />
Once <math>\hat{\beta}_1\,\!</math> and <math>\hat{\beta}_0\,\!</math> are known, the fitted regression line can be written as:<br />
<br />
<br />
::<math>\hat{y}=\hat{\beta}_0+\hat{\beta}_1 x\,\!</math><br />
<br />
<br />
where <math>\hat{y}\,\!</math> is the fitted or estimated value based on the fitted regression model. It is an estimate of the mean value, <math>E(Y)\,\!</math>. The fitted value,<math>\hat{y}_i\,\!</math>, for a given value of the predictor variable, <math>x_i\,\!</math>, may be different from the corresponding observed value, <math>y_i\,\!</math>. The difference between the two values is called the ''residual'', <math>e_i\,\!</math>:<br />
<br />
<br />
::<math>e_i=y_i-\hat{y}_i\,\!</math><br />
<br />
<br />
====Calculation of the Fitted Line Using Least Square Estimates====<br />
The least square estimates of the regression coefficients can be obtained for the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]] as follows:<br />
<br />
<br />
::<math>\begin{align}\hat{\beta}_1 = & \frac{\sum_{i=1}^n y_i x_i- \frac{(\sum_{i=1}^n y_i) (\sum_{i=1}^n x_i)}{n}}{\sum_{i=1}^n (x_i-\bar{x})^2} \\<br />
= & \frac{322516-\frac{4158\times 1871}{25}}{5679.36} \\<br />
= & 1.9952 \approx 2.00\end{align}\,\!</math><br />
<br />
<br />
::<math>\begin{align}\hat{\beta}_0 = & \bar{y}-\hat{\beta}_1 \bar{x} \\<br />
= & 166.32 - 1.9952\times 74.84 \\<br />
= & 17.0016 \approx 17.00\end{align}\,\!</math><br />
<br />
<br />
Knowing <math>\hat{\beta}_0\,\!</math> and <math>\hat{\beta}_1\,\!</math>, the fitted regression line is:<br />
<br />
<br />
::<math>\begin{align}\hat{y} = & \hat{\beta}_0 + \hat{\beta}_1 x \\<br />
= & 17.0016 + 1.9952 \times x \\<br />
\approx & 17 + 2{x}\end{align}\,\!</math><br />
<br />
<br />
This line is shown in the figure below.<br />
<br />
<br />
[[Image:doe4.4.png|center|637px|Fitted regression line for the data. Also shown is the residual for the 21st observation.|link=]]<br />
<br />
<br />
Once the fitted regression line is known, the fitted value of <math>Y\,\!</math> corresponding to any observed data point can be calculated. For example, the fitted value corresponding to the 21st observation in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]] is:<br />
<br />
<br />
::<math>\begin{align}\hat{y}_{21}= & \hat{\beta}_0 + \hat{\beta}_1 x_{21} \\<br />
= & (17.0016) + (1.9952) \times 93 \\<br />
= & 202.6\end{align}\,\!</math><br />
<br />
<br />
The observed response at this point is <math>y_{21}=194\,\!</math>. Therefore, the residual at this point is:<br />
<br />
<br />
::<math>\begin{align}e_{21} = & y_{21}-\hat{y}_{21} \\<br />
= & 194-202.6 \\<br />
= & -8.6\end{align}\,\!</math><br />
<br />
<br />
In DOE folios, fitted values and residuals can be calculated. The values are shown in the figure below.<br />
<br />
<br />
[[Image:doe4_5.png|center|880px|Fitted values and residuals for the data.|link=]]<br />
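The least-squares calculations above are straightforward to code. The sketch below uses a small made-up data set (not the chemical-yield table) whose points fall exactly on a line, so every residual is zero:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept per the equations above."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar)**2 for x in xs)
    # numerator of the slope estimate: sum(y*x) - sum(y)*sum(x)/n
    sxy = sum(y * x for x, y in zip(xs, ys)) - sum(ys) * sum(xs) / n
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    return b0, b1

def residuals(xs, ys, b0, b1):
    """Observed minus fitted values, e_i = y_i - y_hat_i."""
    return [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
```

On real data, such as the yield observations, the residuals would be nonzero and would sum (up to rounding) to zero.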
<br />
==Hypothesis Tests in Simple Linear Regression==<br />
<br />
The following sections discuss hypothesis tests on the regression coefficients in simple linear regression. These tests can be carried out if it can be assumed that the random error term, <math>\epsilon\,\!</math>, is normally and independently distributed with a mean of zero and variance of <math>\sigma^2\,\!</math>. <br />
<br />
===t Tests===<br />
<br />
The <math>t\,\!</math> tests are used to conduct hypothesis tests on the regression coefficients obtained in simple linear regression. A statistic based on the <math>t\,\!</math> distribution is used to test the two-sided hypothesis that the true slope, <math>\beta_1\,\!</math>, equals some constant value, <math>\beta_{1,0}\,\!</math>. The statements for the hypothesis test are expressed as:<br />
<br />
<br />
::<math>\begin{align}H_0 & : & \beta_1=\beta_{1,0} \\<br />
H_1 & : & \beta_{1}\ne\beta_{1,0}\end{align}\,\!</math><br />
<br />
<br />
The test statistic used for this test is:<br />
<br />
<br />
::<math>T_0=\frac{\hat{\beta}_1-\beta_{1,0}}{se(\hat{\beta}_1)}\,\!</math><br />
<br />
<br />
where <math>\hat{\beta}_1\,\!</math> is the least square estimate of <math>\beta_1\,\!</math>, and <math>se(\hat{\beta}_1)\,\!</math> is its standard error. The value of <math>se(\hat{\beta}_1)\,\!</math> can be calculated as follows:<br />
<br />
<br />
:<math>se(\hat{\beta}_1)= \sqrt{\frac{\frac{\displaystyle \sum_{i=1}^n e_i^2}{n-2}}{\displaystyle \sum_{i=1}^n (x_i-\bar{x})^2}}\,\!</math><br />
<br />
<br />
The test statistic, <math>T_0\,\!</math> , follows a <math>t\,\!</math> distribution with <math>(n-2)\,\!</math> degrees of freedom, where <math>n\,\!</math> is the total number of observations. The null hypothesis, <math>H_0\,\!</math>, is accepted if the calculated value of the test statistic is such that:<br />
<br />
<br />
::<math>-t_{\alpha/2,n-2}<T_0<t_{\alpha/2,n-2}\,\!</math><br />
<br />
<br />
where <math>t_{\alpha/2,n-2}\,\!</math> and <math>-t_{\alpha/2,n-2}\,\!</math> are the critical values for the two-sided hypothesis. <math>t_{\alpha/2,n-2}\,\!</math> is the percentile of the <math>t\,\!</math> distribution corresponding to a cumulative probability of <math>(1-\alpha/2)\,\!</math> and <math>\alpha\,\!</math> is the significance level. <br />
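Putting the pieces together, the test statistic for the slope can be computed as follows. The data set is hypothetical, and the comparison against <math>t_{\alpha/2,n-2}\,\!</math> is left to a table or statistics package since the Python standard library does not provide <math>t\,\!</math> quantiles:

```python
import math

def slope_t_statistic(xs, ys, beta10=0.0):
    """T0 for H0: beta1 = beta10; compare |T0| to t_{alpha/2, n-2}."""
    n = len(xs)
    xbar = sum(xs) / n
    sxx = sum((x - xbar)**2 for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys) / n
    b1 = sxy / sxx
    b0 = sum(ys) / n - b1 * xbar
    # residual sum of squares, then the standard error of the slope
    sse = sum((y - b0 - b1 * x)**2 for x, y in zip(xs, ys))
    se_b1 = math.sqrt((sse / (n - 2)) / sxx)
    return (b1 - beta10) / se_b1, se_b1
```

A large <math>|T_0|\,\!</math> relative to the critical value leads to rejection of <math>H_0\,\!</math>, i.e., to concluding that the slope is significantly different from <math>\beta_{1,0}\,\!</math>.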
<br />
If the value of <math>\beta_{1,0}\,\!</math> used is zero, then the hypothesis tests for the significance of regression. In other words, the test indicates if the fitted regression model is of value in explaining variations in the observations or if you are trying to impose a regression model when no true relationship exists between <math>x\,\!</math> and <math>Y\,\!</math>. Failure to reject <math>H_0:\beta_1=0\,\!</math> implies that no linear relationship exists between <math>x\,\!</math> and <math>Y\,\!</math>. This result may be obtained when the scatter plots of <math>y\,\!</math> against <math>x\,\!</math> are as shown in (a) and (b) of the following figure. (a) represents the case where no model exists for the observed data. In this case you would be trying to fit a regression model to noise or random variation. (b) represents the case where the true relationship between <math>x\,\!</math> and <math>Y\,\!</math> is not linear. (c) and (d) represent the cases when <math>H_0:\beta_1=0\,\!</math> is rejected, implying that a model does exist between <math>x\,\!</math> and <math>Y\,\!</math>. (c) represents the case where the linear model is sufficient, while (d) represents the case where a higher-order model may be needed.<br />
<br />
[[Image:doe4.6.png|center|500px|Possible scatter plots of <math>y\,\!</math> against <math>x\,\!</math>. Plots (a) and (b) represent cases when <math>H_0:\beta_1=0\,\!</math> is not rejected. Plots (c) and (d) represent cases when <math>H_0:\beta_1=0\,\!</math> is rejected.|link=]]<br />
<br />
<br />
A similar procedure can be used to test the hypothesis on the intercept. The test statistic used in this case is:<br />
<br />
<br />
::<math>T_0=\frac{\hat{\beta}_0-\beta_{0,0}}{se(\hat{\beta}_0)}\,\!</math><br />
<br />
<br />
where <math>\hat{\beta}_0\,\!</math> is the least square estimate of <math>\beta_0\,\!</math>, and <math>se(\hat{\beta}_0)\,\!</math> is its standard error which is calculated using:<br />
<br />
<br />
:<math>se(\hat{\beta}_0)= \sqrt{\frac{\displaystyle\sum_{i=1}^n e_i^2}{n-2} \Bigg[ \frac{1}{n}+\frac{\bar{x}^2}{\displaystyle\sum_{i=1}^n (x_i-\bar{x})^2} \Bigg]}\,\!</math><br />
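The standard error and test statistic calculations above can be sketched numerically. The following Python snippet (standard library only) plugs in the summary values quoted for the yield data example used throughout this chapter; the variable names are illustrative and not part of any ReliaSoft software.

```python
import math

# Summary statistics quoted for this chapter's yield example (assumed values)
n = 25                  # total number of observations
sse = 371.63            # error sum of squares, sum of e_i^2
sxx = 5679.36           # sum of (x_i - x_bar)^2
x_bar = 74.84           # mean of the predictor values
beta1_hat = 1.9952      # least squares estimate of the slope

sigma2_hat = sse / (n - 2)                    # estimate of the error variance
se_beta1 = math.sqrt(sigma2_hat / sxx)        # standard error of the slope
se_beta0 = math.sqrt(sigma2_hat * (1.0 / n + x_bar**2 / sxx))  # standard error of the intercept

t0 = (beta1_hat - 0.0) / se_beta1             # test statistic for H0: beta1 = 0
```

The computed <code>t0</code> is then compared against the critical value <math>t_{\alpha/2,n-2}\,\!</math> to accept or reject the null hypothesis.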
<br />
<br />
'''Example'''<br />
<br />
<br />
The test for the significance of regression for the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]] is illustrated in this example. The test is carried out using the <math>t\,\!</math> test on the coefficient <math>\beta_1\,\!</math>. The hypothesis to be tested is <math>H_0 : \beta_1 = 0\,\!</math>. To calculate the statistic to test <math>H_0\,\!</math>, the estimate, <math>\hat{\beta}_1\,\!</math>, and the standard error, <math>se(\hat{\beta}_1)\,\!</math>, are needed. The value of <math>\hat{\beta}_1\,\!</math> was obtained in [[Simple_Linear_Regression_Analysis#Fitted_Regression_Line|this section]]. The standard error can be calculated as follows:<br />
<br />
<br />
:<math>\begin{align}se(\hat{\beta}_1) = & \sqrt{\frac{\frac{\displaystyle \sum_{i=1}^n e_i^2}{n-2}}{\displaystyle \sum_{i=1}^n (x_i-\bar{x})^2}} \\<br />
= & \sqrt{\frac{(371.627/23)}{5679.36}} \\<br />
= & 0.0533\end{align}\,\!</math><br />
<br />
<br />
Then, the test statistic can be calculated using the following equation:<br />
<br />
<br />
::<math>\begin{align}t_0 = & \frac{\hat{\beta}_1-\beta_{1,0}}{se(\hat{\beta}_1)} \\<br />
= & \frac{1.9952-0}{0.0533} \\<br />
= & 37.4058\end{align}\,\!</math><br />
<br />
<br />
The <math>p\,\!</math> value corresponding to this statistic based on the <math>t\,\!</math> distribution with 23 (<math>n-2=25-2=23\,\!</math>) degrees of freedom can be obtained as follows:<br />
<br />
<br />
::<math>\begin{align}p\text{ }value = & 2\times (1-P(T\le t_0)) \\<br />
= & 2 \times (1-0.999999) \\<br />
\approx & 0\end{align}\,\!</math><br />
<br />
<br />
Assuming that the desired significance level is 0.1, since <math>p\,\!</math> value < 0.1, <math>H_0 : \beta_1=0\,\!</math> is rejected indicating that a relation exists between temperature and yield for the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]]. Using this result along with the scatter plot, it can be concluded that the relationship between temperature and yield is linear.<br />
<br />
In Weibull++ DOE folios, information related to the <math>t\,\!</math> test is displayed in the Regression Information table as shown in the following figure. In this table the <math>t\,\!</math> test for <math>\beta_1\,\!</math> is displayed in the row for the term Temperature because <math>\beta_1\,\!</math> is the coefficient that represents the variable temperature in the regression model. The columns labeled Standard Error, T Value and P Value represent the standard error, the test statistic for the test and the <math>p\,\!</math> value for the <math>t\,\!</math> test, respectively. These values have been calculated for <math>\beta_1\,\!</math> in this example. The Coefficient column represents the estimate of regression coefficients. The Effect column represents values obtained by multiplying the coefficients by a factor of 2. This value is useful in the case of two factor experiments and is explained in [[Two_Level_Factorial_Experiments| Two Level Factorial Experiments]]. Columns Low Confidence and High Confidence represent the limits of the confidence intervals for the regression coefficients and are explained in [[Simple_Linear_Regression_Analysis#Confidence_Interval_on_Regression_Coefficients|Confidence Interval on Regression Coefficients]].<br />
<br />
<br />
[[Image:doe4_7.png|center|826px|Regression results for the data.|link=]]<br />
<br />
===Analysis of Variance Approach to Test the Significance of Regression===<br />
<br />
The analysis of variance (ANOVA) is another method to test for the significance of regression. As the name implies, this approach uses the variance of the observed data to determine if a regression model can be applied to the observed data. The observed variance is partitioned into components that are then used in the test for significance of regression.<br />
<br />
====Sum of Squares====<br />
<br />
The total variance (i.e., the variance of all of the observed data) is estimated using the observed data. As mentioned in [[Statistical_Background_on_DOE| Statistical Background]], the variance of a population can be estimated using the sample variance, which is calculated using the following relationship:<br />
<br />
<br />
::<math>{{s}^{2}}=\frac{\underset{i=1}{\overset{n}{\mathop{\sum }}}\,{{({{y}_{i}}-\bar{y})}^{2}}}{n-1}\,\!</math><br />
<br />
<br />
The quantity in the numerator of the previous equation is called the ''sum of squares''. It is the sum of the square of deviations of all the observations, <math>{{y}_{i}}\,\!</math>, from their mean, <math>\bar{y}\,\!</math>. In the context of ANOVA this quantity is called the ''total sum of squares'' (abbreviated <math>S{{S}_{T}}\,\!</math>) because it relates to the total variance of the observations. Thus:<br />
<br />
<br />
::<math>S{{S}_{T}}=\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{y}_{i}}-\bar{y})}^{2}}\,\!</math><br />
<br />
<br />
The denominator in the relationship of the sample variance is the number of degrees of freedom associated with the sample variance. Therefore, the number of degrees of freedom associated with <math>S{{S}_{T}}\,\!</math>, <math>dof(S{{S}_{T}})\,\!</math>, is <math>n-1\,\!</math>. The sample variance is also referred to as a ''mean square'' because it is obtained by dividing the sum of squares by the respective degrees of freedom. Therefore, the total mean square (abbreviated <math>M{{S}_{T}}\,\!</math>) is:<br />
<br />
<br />
::<math>M{{S}_{T}}=\frac{S{{S}_{T}}}{dof(S{{S}_{T}})}=\frac{S{{S}_{T}}}{n-1}\,\!</math><br />
<br />
<br />
When you attempt to fit a regression model to the observations, you are trying to explain some of the variation of the observations using this model. If the regression model is such that the resulting fitted regression line passes through all of the observations, then you would have a "perfect" model (see (a) of the figure below). In this case the model would explain all of the variability of the observations. Therefore, the model sum of squares (also referred to as the regression sum of squares and abbreviated <math>S{{S}_{R}}\,\!</math>) equals the total sum of squares; i.e., the model explains all of the observed variance:<br />
<br />
<br />
::<math>S{{S}_{R}}=S{{S}_{T}}\,\!</math><br />
<br />
<br />
For the perfect model, the regression sum of squares, <math>S{{S}_{R}}\,\!</math>, equals the total sum of squares, <math>S{{S}_{T}}\,\!</math>, because all estimated values, <math>{{\hat{y}}_{i}}\,\!</math>, will equal the corresponding observations, <math>{{y}_{i}}\,\!</math>. <math>S{{S}_{R}}\,\!</math> can be calculated using a relationship similar to the one for obtaining <math>S{{S}_{T}}\,\!</math> by replacing <math>{{y}_{i}}\,\!</math> by <math>{{\hat{y}}_{i}}\,\!</math> in the relationship of <math>S{{S}_{T}}\,\!</math>. Therefore:<br />
<br />
<br />
::<math>S{{S}_{R}}=\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{\hat{y}}_{i}}-\bar{y})}^{2}}\,\!</math><br />
<br />
<br />
The number of degrees of freedom associated with <math>S{{S}_{R}}\,\!</math> is 1. <br />
<br />
<br />
Based on the preceding discussion of ANOVA, a perfect regression model exists when the fitted regression line passes through all observed points. However, this is not usually the case, as seen in (b) of the following figure. <br />
<br />
<br />
[[Image:doe4.8.png|center|300px|A perfect regression model will pass through all observed data points as shown in (a). Most models are imperfect and do not fit perfectly to all data points as shown in (b).|link=]]<br />
<br />
<br />
In both of these plots, a number of points do not follow the fitted regression line. This indicates that a part of the total variability of the observed data still remains unexplained. This portion of the total variability or the total sum of squares, that is not explained by the model, is called the ''residual sum of squares'' or the ''error sum of squares'' (abbreviated <math>S{{S}_{E}}\,\!</math>). The deviation for this sum of squares is obtained at each observation in the form of the residuals, <math>{{e}_{i}}\,\!</math>. The error sum of squares can be obtained as the sum of squares of these deviations:<br />
<br />
<br />
::<math>S{{S}_{E}}=\underset{i=1}{\overset{n}{\mathop \sum }}\,e_{i}^{2}=\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{y}_{i}}-{{\hat{y}}_{i}})}^{2}}\,\!</math><br />
<br />
<br />
The number of degrees of freedom associated with <math>S{{S}_{E}}\,\!</math>, <math>dof(S{{S}_{E}})\,\!</math>, is <math>(n-2)\,\!</math>. <br />
The total variability of the observed data (i.e., total sum of squares, <math>S{{S}_{T}}\,\!</math>) can be written using the portion of the variability explained by the model, <math>S{{S}_{R}}\,\!</math>, and the portion unexplained by the model, <math>S{{S}_{E}}\,\!</math>, as:<br />
<br />
<br />
::<math>S{{S}_{T}}=S{{S}_{R}}+S{{S}_{E}}\,\!</math><br />
<br />
<br />
The above equation is also referred to as the analysis of variance identity and can be expanded as follows:<br />
<br />
<br />
::<math>\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{y}_{i}}-\bar{y})}^{2}}=\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{\hat{y}}_{i}}-\bar{y})}^{2}}+\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{y}_{i}}-{{\hat{y}}_{i}})}^{2}}\,\!</math><br />
<br />
<br />
[[Image:doe4.9.png|center|600px|Scatter plots showing the deviations for the sum of squares used in ANOVA. (a) shows deviations for <math>S S_{T}\,\!</math>, (b) shows deviations for <math>S S_{R}\,\!</math>, and (c) shows deviations for <math>S S_{E}\,\!</math>.|link=]]<br />
<br />
====Mean Squares====<br />
<br />
As mentioned previously, mean squares are obtained by dividing the sum of squares by the respective degrees of freedom. For example, the error mean square, <math>M{{S}_{E}}\,\!</math>, can be obtained as:<br />
<br />
<br />
::<math>M{{S}_{E}}=\frac{S{{S}_{E}}}{dof(S{{S}_{E}})}=\frac{S{{S}_{E}}}{n-2}\,\!</math><br />
<br />
<br />
The error mean square is an estimate of the variance, <math>{{\sigma }^{2}}\,\!</math>, of the random error term, <math>\epsilon\,\!</math>, and can be written as: <br />
<br />
<br />
::<math>{{\hat{\sigma }}^{2}}=\frac{S{{S}_{E}}}{n-2}\,\!</math><br />
<br />
<br />
Similarly, the regression mean square, <math>M{{S}_{R}}\,\!</math>, can be obtained by dividing the regression sum of squares by the respective degrees of freedom as follows:<br />
<br />
<br />
::<math>M{{S}_{R}}=\frac{S{{S}_{R}}}{dof(S{{S}_{R}})}=\frac{S{{S}_{R}}}{1}\,\!</math><br />
<br />
<br />
====F Test====<br />
<br />
To test the hypothesis <math>{{H}_{0}}:{{\beta }_{1}}=0\,\!</math>, the statistic used is based on the <math>F\,\!</math> distribution. It can be shown that if the null hypothesis <math>{{H}_{0}}\,\!</math> is true, then the statistic:<br />
<br />
<br />
::<math>{{F}_{0}}=\frac{M{{S}_{R}}}{M{{S}_{E}}}=\frac{S{{S}_{R}}/1}{S{{S}_{E}}/(n-2)}\,\!</math><br />
<br />
<br />
follows the <math>F\,\!</math> distribution with <math>1\,\!</math> degree of freedom in the numerator and <math>(n-2)\,\!</math> degrees of freedom in the denominator. <math>{{H}_{0}}\,\!</math> is rejected if the calculated statistic, <math>{{F}_{0}}\,\!</math>, is such that:<br />
<br />
<br />
::<math>{{F}_{0}}>{{f}_{\alpha ,1,n-2}}\,\!</math><br />
<br />
<br />
where <math>{{f}_{\alpha ,1,n-2}}\,\!</math> is the percentile of the <math>F\,\!</math> distribution corresponding to a cumulative probability of (<math>1-\alpha\,\!</math>) and <math>\alpha\,\!</math> is the significance level.<br />
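As a numeric sketch of the F test, the following Python snippet applies the ANOVA identity and the F statistic formula to the sums of squares quoted for this chapter's yield example (the numeric values are assumptions taken from the text):

```python
# ANOVA partition and F test from quoted sums of squares (assumed values)
n = 25
ss_t = 22979.44         # total sum of squares
ss_r = 22607.81         # regression sum of squares
ss_e = ss_t - ss_r      # error sum of squares, via the identity SS_T = SS_R + SS_E

ms_r = ss_r / 1         # regression mean square (1 degree of freedom)
ms_e = ss_e / (n - 2)   # error mean square

f0 = ms_r / ms_e        # F statistic for H0: beta1 = 0

f_crit = 2.937          # critical value for alpha = 0.1 with (1, 23) dof, quoted in the text
reject_h0 = f0 > f_crit
```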
<br />
<br />
'''Example'''<br />
<br />
The analysis of variance approach to test the significance of regression can be applied to the yield data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]]. To calculate the statistic, <math>{{F}_{0}}\,\!</math>, for the test, the sums of squares have to be obtained. They can be calculated as shown next.<br />
The total sum of squares can be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{T}}= & \underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{y}_{i}}-\bar{y})}^{2}} \\ <br />
= & \underset{i=1}{\overset{25}{\mathop \sum }}\,{{({{y}_{i}}-166.32)}^{2}} \\ <br />
= & 22979.44 <br />
\end{align}\,\!</math><br />
<br />
<br />
The regression sum of squares can be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{R}}= & \underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{{\hat{y}}}_{i}}-\bar{y})}^{2}} \\ <br />
= & \underset{i=1}{\overset{25}{\mathop \sum }}\,{{({{{\hat{y}}}_{i}}-166.32)}^{2}} \\ <br />
= & 22607.81 <br />
\end{align}\,\!</math><br />
<br />
<br />
The error sum of squares can be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{E}}= & \underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{y}_{i}}-{{{\hat{y}}}_{i}})}^{2}} \\ <br />
= & \underset{i=1}{\overset{25}{\mathop \sum }}\,{{({{y}_{i}}-{{{\hat{y}}}_{i}})}^{2}} \\ <br />
= & 371.63 <br />
\end{align}\,\!</math><br />
<br />
<br />
Knowing the sum of squares, the statistic to test <math>{{H}_{0}}:{{\beta }_{1}}=0\,\!</math> can be calculated as follows:<br />
<br />
<br />
::<math>\begin{align}<br />
{{f}_{0}}=& \frac{M{{S}_{R}}}{M{{S}_{E}}} \\ <br />
= & \frac{S{{S}_{R}}/1}{S{{S}_{E}}/(n-2)} \\ <br />
= & \frac{22607.81/1}{371.63/(25-2)} \\ <br />
= & 1399.20 <br />
\end{align}\,\!</math><br />
<br />
<br />
The critical value at a significance level of 0.1 is <math>{{f}_{0.1,1,23}}=2.937\,\!</math>. Since <math>{{f}_{0}}>{{f}_{\alpha ,1,n-2}}\,\!</math>, <math>{{H}_{0}}:{{\beta }_{1}}=0\,\!</math> is rejected and it is concluded that <math>{{\beta }_{1}}\,\!</math> is not zero. Alternatively, the <math>p\,\!</math> value can also be used. The <math>p\,\!</math> value corresponding to the test statistic, <math>{{f}_{0}}\,\!</math>, based on the <math>F\,\!</math> distribution with one degree of freedom in the numerator and 23 degrees of freedom in the denominator is:<br />
<br />
<br />
::<math>\begin{align}<br />
p\text{ }value= & 1-P(F\le {{f}_{0}}) \\ <br />
= & 4.17\times {{10}^{-22}} <br />
\end{align}\,\!</math><br />
<br />
<br />
Assuming that the desired significance is 0.1, since the <math>p\,\!</math> value < 0.1, then <math>{{H}_{0}}:{{\beta }_{1}}=0\,\!</math> is rejected, implying that a relation does exist between temperature and yield for the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]]. Using this result along with the scatter plot of the above [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| figure]], it can be concluded that the relationship that exists between temperature and yield is linear. This result is displayed in the ANOVA table as shown in the following figure. Note that this is the same result that was obtained from the <math>t\,\!</math> test in the section [[Simple_Linear_Regression_Analysis#Tests|t Tests]]. The ANOVA and Regression Information tables in Weibull++ DOE folios represent two different ways to test for the significance of the regression model. In the case of multiple linear regression models these tables are expanded to allow tests on individual variables used in the model. This is done using extra sum of squares. Multiple linear regression models and the application of extra sum of squares in the analysis of these models are discussed in [[Multiple_Linear_Regression_Analysis| Multiple Linear Regression Analysis]].<br />
<br />
<br />
[[Image:doe4_10.png|center|747px| ANOVA table for the data.|link=]]<br />
<br />
==Confidence Intervals in Simple Linear Regression==<br />
<br />
A confidence interval represents a closed interval where a certain percentage of the population is likely to lie. For example, a 90% confidence interval with a lower limit of <math>A\,\!</math> and an upper limit of <math>B\,\!</math> implies that 90% of the population lies between the values of <math>A\,\!</math> and <math>B\,\!</math>. Out of the remaining 10% of the population, 5% is less than <math>A\,\!</math> and 5% is greater than <math>B\,\!</math>. (For details refer to the [[Life_Data_Analysis_Reference_Book| Life Data Analysis Reference Book]].) This section discusses confidence intervals used in simple linear regression analysis.<br />
<br />
===Confidence Interval on Regression Coefficients===<br />
<br />
A 100 (<math>1-\alpha\,\!</math>) percent confidence interval on <math>{{\beta }_{1}}\,\!</math> is obtained as follows:<br />
<br />
<br />
::<math>{{\hat{\beta }}_{1}}\pm {{t}_{\alpha /2,n-2}}\cdot se({{\hat{\beta }}_{1}})\,\!</math><br />
<br />
<br />
Similarly, a 100 (<math>1-\alpha\,\!</math>) percent confidence interval on <math>{{\beta }_{0}}\,\!</math> is obtained as:<br />
<br />
<br />
::<math>{{\hat{\beta }}_{0}}\pm {{t}_{\alpha /2,n-2}}\cdot se({{\hat{\beta }}_{0}})\,\!</math><br />
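A minimal numeric sketch of the interval on the slope, assuming the point estimate, standard error, and t critical value quoted elsewhere in this chapter for the yield example:

```python
# 95% confidence interval on the slope (assumed summary values from the text)
beta1_hat = 1.9952      # least squares estimate of the slope
se_beta1 = 0.0533       # standard error of the slope
t_crit = 2.069          # t critical value for alpha/2 = 0.025 with 23 dof

lower = beta1_hat - t_crit * se_beta1
upper = beta1_hat + t_crit * se_beta1
```

Note that the resulting interval excludes zero, which is consistent with rejecting <math>{{H}_{0}}:{{\beta }_{1}}=0\,\!</math> in the significance tests above.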
<br />
<br />
===Confidence Interval on Fitted Values===<br />
<br />
A 100 (<math>1-\alpha\,\!</math>) percent confidence interval on any fitted value, <math>{{\hat{y}}_{i}}\,\!</math>, is obtained as follows:<br />
<br />
<br />
::<math>{{\hat{y}}_{i}}\pm {{t}_{\alpha /2,n-2}}\sqrt{{{{\hat{\sigma }}}^{2}}\left[ \frac{1}{n}+\frac{{{({{x}_{i}}-\bar{x})}^{2}}}{\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{x}_{i}}-\bar{x})}^{2}}} \right]}\,\!</math><br />
<br />
<br />
It can be seen that the width of the confidence interval depends on the value of <math>{{x}_{i}}\,\!</math>: it is a minimum at <math>{{x}_{i}}=\bar{x}\,\!</math> and widens as <math>\left| {{x}_{i}}-\bar{x} \right|\,\!</math> increases.<br />
<br />
===Confidence Interval on New Observations===<br />
<br />
For the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]], assume that a new value of the yield is observed after the regression model is fit to the data. This new observation is independent of the observations used to obtain the regression model. If <math>{{x}_{p}}\,\!</math> is the level of the temperature at which the new observation was taken, then the estimate for this new value based on the fitted regression model is:<br />
<br />
<br />
::<math>\begin{align}<br />
{{{\hat{y}}}_{p}}= & {{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{1}}{{x}_{p}} \\<br />
= & 17.0016+1.9952\times {{x}_{p}} \\<br />
\end{align}\,\!</math><br />
<br />
<br />
If a confidence interval needs to be obtained on <math>{{\hat{y}}_{p}}\,\!</math>, then this interval should include both the error from the fitted model and the error associated with future observations. This is because <math>{{\hat{y}}_{p}}\,\!</math> represents the estimate for a value of <math>Y\,\!</math> that was not used to obtain the regression model. The confidence interval on <math>{{\hat{y}}_{p}}\,\!</math> is referred to as the ''prediction interval''. A 100 (<math>1-\alpha\,\!</math>) percent prediction interval on a new observation is obtained as follows:<br />
<br />
<br />
::<math>{{\hat{y}}_{p}}\pm {{t}_{\alpha /2,n-2}}\sqrt{{{{\hat{\sigma }}}^{2}}\left[ 1+\frac{1}{n}+\frac{{{({{x}_{p}}-\bar{x})}^{2}}}{\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{x}_{i}}-\bar{x})}^{2}}} \right]}\,\!</math><br />
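Both interval formulas can be sketched as small helper functions. The snippet below uses the summary values quoted for this chapter's yield example (assumed), with <code>t_crit</code> standing in for the t critical value with 23 degrees of freedom:

```python
import math

# Assumed summary values from this chapter's yield example
n = 25
x_bar = 74.84
sxx = 5679.36           # sum of (x_i - x_bar)^2
ms_e = 16.16            # error mean square (estimate of sigma^2)
t_crit = 2.069          # t critical value for alpha/2 = 0.025 with 23 dof
b0, b1 = 17.0016, 1.9952

def fitted(x):
    return b0 + b1 * x

def ci_half_width(x):
    # half-width of the confidence interval on the fitted value at x
    return t_crit * math.sqrt(ms_e * (1.0 / n + (x - x_bar) ** 2 / sxx))

def pi_half_width(x):
    # half-width of the prediction interval on a new observation at x;
    # the extra "1 +" term accounts for the variance of the new observation itself
    return t_crit * math.sqrt(ms_e * (1.0 + 1.0 / n + (x - x_bar) ** 2 / sxx))
```

The prediction interval at a given <code>x</code> is always wider than the confidence interval on the fitted value there, because it carries the additional error of a future observation.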
<br />
<br />
'''Example'''<br />
<br />
To illustrate the calculation of confidence intervals, the 95% confidence interval on the response at <math>x=93\,\!</math> for the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]] is obtained in this example. A 95% prediction interval is also obtained assuming that a new observation for the yield was made at <math>x=91\,\!</math>.<br />
<br />
The fitted value, <math>{{\hat{y}}_{i}}\,\!</math>, corresponding to <math>x=93\,\!</math> is:<br />
<br />
::<math>\begin{align}<br />
{{{\hat{y}}}_{21}}= & {{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{1}}{{x}_{21}} \\ <br />
= & 17.0016+1.9952\times 93 \\ <br />
= & 202.6 <br />
\end{align}\,\!</math><br />
<br />
<br />
The 95% confidence interval <math>(\alpha =0.05)\,\!</math> on the fitted value, <math>{{\hat{y}}_{21}}=202.6\,\!</math>, is:<br />
<br />
<br />
::<math>\begin{align}<br />
= & {{{\hat{y}}}_{i}}\pm {{t}_{\alpha /2,n-2}}\sqrt{{{{\hat{\sigma }}}^{2}}\left[ \frac{1}{n}+\frac{{{({{x}_{i}}-\bar{x})}^{2}}}{\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{x}_{i}}-\bar{x})}^{2}}} \right]} \\ <br />
= & 202.6\pm {{t}_{0.025,23}}\sqrt{M{{S}_{E}}\left[ \frac{1}{25}+\frac{{{(93-74.84)}^{2}}}{5679.36} \right]} \\ <br />
= & 202.6\pm 2.069\sqrt{16.16\left[ \frac{1}{25}+\frac{{{(93-74.84)}^{2}}}{5679.36} \right]} \\ <br />
= & 202.6\pm 2.602 <br />
\end{align}\,\!</math><br />
<br />
<br />
The 95% limits on <math>{{\hat{y}}_{21}}\,\!</math> are 199.95 and 205.2, respectively.<br />
The estimated value based on the fitted regression model for the new observation at <math>x=91\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
{{{\hat{y}}}_{p}}= & {{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{1}}{{x}_{p}} \\ <br />
= & 17.0016+1.9952\times 91 \\ <br />
= & 198.6 <br />
\end{align}\,\!</math><br />
<br />
<br />
The 95% prediction interval on <math>{{\hat{y}}_{p}}=198.6\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
= & {{{\hat{y}}}_{p}}\pm {{t}_{\alpha /2,n-2}}\sqrt{{{{\hat{\sigma }}}^{2}}\left[ 1+\frac{1}{n}+\frac{{{({{x}_{p}}-\bar{x})}^{2}}}{\underset{i=1}{\overset{n}{\mathop \sum }}\,{{({{x}_{p}}-\bar{x})}^{2}}} \right]} \\ <br />
= & 198.6\pm {{t}_{0.025,23}}\sqrt{M{{S}_{E}}\left[ 1+\frac{1}{25}+\frac{{{(91-74.84)}^{2}}}{5679.36} \right]} \\ <br />
= & 198.6\pm 2.069\sqrt{16.16\left[ 1+\frac{1}{25}+\frac{{{(91-74.84)}^{2}}}{5679.36} \right]} \\ <br />
= & 198.6\pm 2.069\times 4.1889 \\ <br />
= & 198.6\pm 8.67 <br />
\end{align}\,\!</math><br />
<br />
<br />
The 95% limits on <math>{{\hat{y}}_{p}}\,\!</math> are 189.9 and 207.2, respectively. In Weibull++ DOE folios, confidence and prediction intervals can be calculated from the control panel. The prediction interval values calculated in this example are shown in the figure below as Low Prediction Interval and High Prediction Interval, respectively. The columns labeled Mean Predicted and Standard Error represent the values of <math>{{\hat{y}}_{p}}\,\!</math> and the standard error used in the calculations. <br />
<br />
<br />
[[Image:doe4_11.png|center|786px|Calculation of prediction intervals in Weibull++.|link=]]<br />
<br />
==Measures of Model Adequacy==<br />
<br />
It is important to analyze the regression model before inferences based on the model are undertaken. The following sections present some techniques that can be used to check the appropriateness of the model for the given data. These techniques help to determine if any of the model assumptions have been violated.<br />
<br />
===Coefficient of Determination (<math>R^2 </math>)===<br />
The coefficient of determination is a measure of the amount of variability in the data accounted for by the regression model. As mentioned previously, the total variability of the data is measured by the total sum of squares, <math>SS_T\,\!</math>. The amount of this variability explained by the regression model is the regression sum of squares, <math>SS_R\,\!</math>. The coefficient of determination is the ratio of the regression sum of squares to the total sum of squares.<br />
<br />
<br />
::<math>R^2 = \frac{SS_R}{SS_T}\,\!</math><br />
<br />
<br />
<math>R^2\,\!</math> can take on values between 0 and 1, since <math>0\le SS_R\le SS_T\,\!</math>. For the yield data example, <math>R^2\,\!</math> can be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
{{R}^{2}}= & \frac{S{{S}_{R}}}{S{{S}_{T}}} \\ <br />
= & \frac{22607.81}{22979.44} \\ <br />
= & 0.98 <br />
\end{align}\,\!</math><br />
<br />
<br />
<br />
Therefore, 98% of the variability in the yield data is explained by the regression model, indicating a very good fit of the model. It may appear that larger values of <math>{{R}^{2}}\,\!</math> indicate a better fitting regression model. However, <math>{{R}^{2}}\,\!</math> should be used cautiously as this is not always the case. The value of <math>{{R}^{2}}\,\!</math> increases as more terms are added to the model, even if the new term does not contribute significantly to the model. Therefore, an increase in the value of <math>{{R}^{2}}\,\!</math> cannot be taken as a sign to conclude that the new model is superior to the older model. Adding a new term may make the regression model worse if the error mean square, <math>M{{S}_{E}}\,\!</math>, for the new model is larger than the <math>M{{S}_{E}}\,\!</math> of the older model, even though the new model will show an increased value of <math>{{R}^{2}}\,\!</math>. In the results obtained from the DOE folio, <math>{{R}^{2}}\,\!</math> is displayed as R-sq under the ANOVA table (as shown in the figure below), which displays the complete analysis sheet for the data in the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]].<br />
<br />
The other values displayed along with R-sq are S, R-sq(adj), PRESS and R-sq(pred). These values measure different aspects of the adequacy of the regression model. For example, the value of S is the square root of the error mean square, <math>MS_E\,\!</math>, and represents the "standard error of the model." A lower value of S indicates a better fitting model. The values of S, R-sq and R-sq(adj) indicate how well the model fits the observed data. The values of PRESS and R-sq(pred) are indicators of how well the regression model predicts new observations. R-sq(adj), PRESS and R-sq(pred) are explained in [[Multiple Linear Regression Analysis]].<br />
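Both <math>R^2\,\!</math> and S can be computed directly from the sums of squares. A sketch using the values quoted for this chapter's yield example (assumed):

```python
import math

# Assumed sums of squares from this chapter's yield example
n = 25
ss_t = 22979.44         # total sum of squares
ss_r = 22607.81         # regression sum of squares
ss_e = ss_t - ss_r      # error sum of squares

r_sq = ss_r / ss_t                 # fraction of variability explained by the model
s = math.sqrt(ss_e / (n - 2))      # S = sqrt(MS_E), "standard error of the model"
```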
<br />
<br />
[[Image:doe4_12.png|center|874px|Complete analysis for the data.|link=]]<br />
<br />
===Residual Analysis===<br />
In the simple linear regression model the true error terms, <math>{{\epsilon }_{i}}\,\!</math>, are never known. The residuals, <math>{{e}_{i}}\,\!</math>, may be thought of as the observed error terms that are similar to the true error terms. Since the true error terms, <math>{{\epsilon }_{i}}\,\!</math>, are assumed to be normally distributed with a mean of zero and a variance of <math>{{\sigma }^{2}}\,\!</math>, in a good model the observed error terms (i.e., the residuals, <math>{{e}_{i}}\,\!</math>) should also follow these assumptions. Thus the residuals in the simple linear regression should be normally distributed with a mean of zero and a constant variance of <math>{{\sigma }^{2}}\,\!</math>. Residuals are usually plotted against the fitted values, <math>{{\hat{y}}_{i}}\,\!</math>, against the predictor variable values, <math>{{x}_{i}}\,\!</math>, and against time or run-order sequence, in addition to the normal probability plot. Plots of residuals are used to check for the following:<br />
<br />
<br />
:1. Residuals follow the normal distribution. <br />
:2. Residuals have a constant variance. <br />
:3. Regression function is linear. <br />
:4. A pattern does not exist when residuals are plotted in a time or run-order sequence. <br />
:5. There are no outliers. <br />
<br />
<br />
Examples of residual plots are shown in the following figure. (a) is a satisfactory plot with the residuals falling in a horizontal band with no systematic pattern. Such a plot indicates an appropriate regression model. (b) shows residuals falling in a funnel shape. Such a plot indicates increase in variance of residuals and the assumption of constant variance is violated here. Transformation on <math>Y\,\!</math> may be helpful in this case (see [[Simple_Linear_Regression_Analysis#Transformations| Transformations]]). If the residuals follow the pattern of (c) or (d), then this is an indication that the linear regression model is not adequate. Addition of higher order terms to the regression model or transformation on <math>x\,\!</math> or <math>Y\,\!</math> may be required in such cases. A plot of residuals may also show a pattern as seen in (e), indicating that the residuals increase (or decrease) as the run order sequence or time progresses. This may be due to factors such as operator-learning or instrument-creep and should be investigated further. <br />
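The residuals themselves can be computed from the closed form least squares estimates. The sketch below uses a small made-up data set (hypothetical values, for illustration only) and demonstrates the property that least squares residuals sum to zero, which is why residual plots are inspected for patterns rather than for a nonzero mean:

```python
# Residuals from a least squares fit to a small hypothetical data set
# (the x, y values below are made up for illustration only)
x = [50.0, 60.0, 70.0, 80.0, 90.0]
y = [115.0, 140.0, 155.0, 177.0, 196.0]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
sxx = sum((xi - x_bar) ** 2 for xi in x)
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))

b1 = sxy / sxx              # least squares slope
b0 = y_bar - b1 * x_bar     # least squares intercept

residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
# Properties of least squares: the residuals sum to (numerically) zero,
# as does the sum of x_i * e_i, so plots are examined for shape, not mean.
```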
<br />
<br />
[[Image:doe4.13.png|center|550px|Possible residual plots (against fitted values, time or run-order) that can be obtained from simple linear regression analysis.|link=]] <br />
<br />
<br />
'''Example'''<br />
<br />
Residual plots for the data of the preceding [[Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis| table]] are shown in the following figures. One of the following figures is the normal probability plot. It can be observed that the residuals follow the normal distribution and the assumption of normality is valid here. In one of the following figures the residuals are plotted against the fitted values, <math>{{\hat{y}}_{i}}\,\!</math>, and in one of the following figures the residuals are plotted against the run order. Both of these plots show that the 21st observation seems to be an outlier. Further investigations are needed to study the cause of this outlier. <br />
<br />
<br />
[[Image:doe4_14.png|center|650px|Normal probability plot of residuals for the data.|link=]]<br />
<br />
<br />
[[Image:doe4_15.png|center|650px|Plot of residuals against fitted values for the data.|link=]]<br />
<br />
<br />
[[Image:doe4_16.png|center|650px|Plot of residuals against run order for the data.|link=]]<br />
<br />
===Lack-of-Fit Test===<br />
<br />
As mentioned in the [[Simple_Linear_Regression_Analysis#Analysis_of_Variance_Approach_to_Test_the_Significance_of_Regression| Analysis of Variance Approach]], a perfect regression model results in a fitted line that passes exactly through all observed data points. This perfect model will give us a zero error sum of squares (<math>S{{S}_{E}}=0\,\!</math>). Thus, no error exists for the perfect model. However, if you record the response values for the same values of <math>{{x}_{i}}\,\!</math> for a second time, under conditions maintained as identical as possible to the first time, observations from the second time will not all fall along the perfect model. The deviations in observations recorded for the second time constitute the "purely" random variation or noise. The sum of squares due to pure error (abbreviated <math>S{{S}_{PE}}\,\!</math>) quantifies these variations. <math>S{{S}_{PE}}\,\!</math> is calculated by taking repeated observations at some or all values of <math>{{x}_{i}}\,\!</math> and adding up the square of deviations at each level of <math>x\,\!</math> using the respective repeated observations at that <math>x\,\!</math> value. <br />
<br />
Assume that there are <math>n\,\!</math> levels of <math>x\,\!</math> and <math>{{m}_{i}}\,\!</math> repeated observations are taken at the <math>i\,\!</math>th level. The data is collected as shown next:<br />
<br />
<br />
::<math>\begin{align}<br />
& {{y}_{11}},{{y}_{12}},....,{{y}_{1{{m}_{1}}}}\text{ repeated observations at }{{x}_{1}} \\ <br />
& {{y}_{21}},{{y}_{22}},....,{{y}_{2{{m}_{2}}}}\text{ repeated observations at }{{x}_{2}} \\ <br />
& ... \\ <br />
& {{y}_{i1}},{{y}_{i2}},....,{{y}_{i{{m}_{i}}}}\text{ repeated observations at }{{x}_{i}} \\ <br />
& ... \\ <br />
& {{y}_{n1}},{{y}_{n2}},....,{{y}_{n{{m}_{n}}}}\text{ repeated observations at }{{x}_{n}} <br />
\end{align}\,\!</math><br />
<br />
<br />
The sum of squares of the deviations from the mean of the observations at the <math>i\,\!</math>th level of <math>x\,\!</math>, <math>{{x}_{i}}\,\!</math>, can be calculated as:<br />
<br />
<br />
::<math>\underset{j=1}{\overset{{{m}_{i}}}{\mathop \sum }}\,{{({{y}_{ij}}-{{\bar{y}}_{i}})}^{2}}\,\!</math><br />
<br />
<br />
where <math>{{\bar{y}}_{i}}\,\!</math> is the mean of the <math>{{m}_{i}}\,\!</math> repeated observations corresponding to <math>{{x}_{i}}\,\!</math> (<math>{{\bar{y}}_{i}}=(1/{{m}_{i}})\underset{j=1}{\overset{{{m}_{i}}}{\mathop \sum }}\,{{y}_{ij}}\,\!</math>). The number of degrees of freedom for these deviations is (<math>{{m}_{i}}-1\,\!</math>), as there are <math>{{m}_{i}}\,\!</math> observations at the <math>i\,\!</math>th level of <math>x\,\!</math> but one degree of freedom is lost in calculating the mean, <math>{{\bar{y}}_{i}}\,\!</math>.<br />
<br />
The total sum of square deviations (or <math>S{{S}_{PE}}\,\!</math>) for all levels of <math>x\,\!</math> can be obtained by summing the deviations for all <math>{{x}_{i}}\,\!</math> as shown next:<br />
<br />
<br />
::<math>S{{S}_{PE}}=\underset{i=1}{\overset{n}{\mathop \sum }}\,\underset{j=1}{\overset{{{m}_{i}}}{\mathop \sum }}\,{{({{y}_{ij}}-{{\bar{y}}_{i}})}^{2}}\,\!</math><br />
<br />
<br />
The total number of degrees of freedom associated with <math>S{{S}_{PE}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
= & \underset{i=1}{\overset{n}{\mathop \sum }}\,({{m}_{i}}-1) \\ <br />
= & \underset{i=1}{\overset{n}{\mathop \sum }}\,{{m}_{i}}-n <br />
\end{align}\,\!</math><br />
<br />
<br />
If all <math>{{m}_{i}}=m\,\!</math> (i.e., <math>m\,\!</math> repeated observations are taken at all levels of <math>x\,\!</math>), then <math>\underset{i=1}{\overset{n}{\mathop \sum }}\,{{m}_{i}}=nm\,\!</math> and the degrees of freedom associated with <math>S{{S}_{PE}}\,\!</math> are: <br />
<br />
<br />
::<math>=nm-n\,\!</math><br />
<br />
<br />
The corresponding mean square in this case will be:<br />
<br />
<br />
::<math>M{{S}_{PE}}=\frac{S{{S}_{PE}}}{nm-n}\,\!</math><br />
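The pure error calculation above can be sketched in a few lines of code. This is an illustrative sketch, not part of the reference; the data values are hypothetical, with <math>{{m}_{i}}=2\,\!</math> repeated observations at each of three levels of <math>x\,\!</math>.<br />

```python
# Hypothetical data: each inner list holds the m_i repeated
# observations of y at one level x_i.
levels = [
    [10.1, 10.4],   # observations at x_1
    [12.0, 11.6],   # observations at x_2
    [13.5, 13.9],   # observations at x_3
]

ss_pe = 0.0   # sum of squares due to pure error, SS_PE
dof_pe = 0    # its degrees of freedom

for obs in levels:
    y_bar = sum(obs) / len(obs)                  # mean at this level
    ss_pe += sum((y - y_bar) ** 2 for y in obs)  # squared deviations about the level mean
    dof_pe += len(obs) - 1                       # m_i - 1 per level

ms_pe = ss_pe / dof_pe  # pure error mean square, MS_PE
```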
<br />
<br />
When repeated observations are used for a perfect regression model, the sum of squares due to pure error, <math>S{{S}_{PE}}\,\!</math>, is also the error sum of squares, <math>S{{S}_{E}}\,\!</math>. When repeated observations are used with imperfect regression models, there are two components of the error sum of squares, <math>S{{S}_{E}}\,\!</math>. One portion is the pure error due to the repeated observations. The other portion is the error that represents variation not captured by the imperfect model. The second portion is termed the sum of squares due to lack-of-fit (abbreviated <math>S{{S}_{LOF}}\,\!</math>) to indicate the deficiency in fit due to departure from the perfect-fit model. Thus, for an imperfect regression model:<br />
<br />
<br />
::<math>S{{S}_{E}}=S{{S}_{PE}}+S{{S}_{LOF}}\,\!</math><br />
<br />
<br />
Knowing <math>S{{S}_{E}}\,\!</math> and <math>S{{S}_{PE}}\,\!</math>, the previous equation can be used to obtain <math>S{{S}_{LOF}}\,\!</math>:<br />
<br />
<br />
::<math>S{{S}_{LOF}}=S{{S}_{E}}-S{{S}_{PE}}\,\!</math><br />
<br />
<br />
The degrees of freedom associated with <math>S{{S}_{LOF}}\,\!</math> can be obtained in a similar manner using subtraction. For the case when <math>m\,\!</math> repeated observations are taken at all levels of <math>x\,\!</math>, the number of degrees of freedom associated with <math>S{{S}_{PE}}\,\!</math> is:<br />
<br />
<br />
::<math>dof(S{{S}_{PE}})=nm-n\,\!</math><br />
<br />
<br />
Since there are <math>nm\,\!</math> total observations, the number of degrees of freedom associated with <math>S{{S}_{E}}\,\!</math> is:<br />
<br />
<br />
::<math>dof(S{{S}_{E}})=nm-2\,\!</math><br />
<br />
<br />
Therefore, the number of degrees of freedom associated with <math>S{{S}_{LOF}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
= & dof(S{{S}_{E}})-dof(S{{S}_{PE}}) \\ <br />
= & (nm-2)-(nm-n) \\ <br />
= & n-2 <br />
\end{align}\,\!</math><br />
<br />
<br />
The corresponding mean square, <math>M{{S}_{LOF}}\,\!</math>, can now be obtained as:<br />
<br />
<br />
::<math>M{{S}_{LOF}}=\frac{S{{S}_{LOF}}}{n-2}\,\!</math><br />
<br />
<br />
The magnitude of <math>S{{S}_{LOF}}\,\!</math> or <math>M{{S}_{LOF}}\,\!</math> will provide an indication of how far the regression model is from the perfect model. An <math>F\,\!</math> test exists to examine the lack-of-fit at a particular significance level. The quantity <math>M{{S}_{LOF}}/M{{S}_{PE}}\,\!</math> follows an <math>F\,\!</math> distribution with <math>(n-2)\,\!</math> degrees of freedom in the numerator and <math>(nm-n)\,\!</math> degrees of freedom in the denominator when all <math>{{m}_{i}}\,\!</math> equal <math>m\,\!</math>. The test statistic for the lack-of-fit test is:<br />
<br />
<br />
::<math>{{F}_{0}}=\frac{M{{S}_{LOF}}}{M{{S}_{PE}}}\,\!</math><br />
<br />
<br />
If the critical value <math>{{f}_{\alpha ,n-2,nm-n}}\,\!</math> is such that:<br />
<br />
<br />
::<math>{{F}_{0}}>{{f}_{\alpha ,n-2,nm-n}}\,\!</math><br />
<br />
<br />
it will lead to the rejection of the hypothesis that the model adequately fits the data.<br />
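The decision rule above can be wrapped in a small function. This is a sketch (the function name and argument layout are not from the reference) under the equal-repeats assumption that all <math>{{m}_{i}}=m\,\!</math>; the critical value must still be supplied from an <math>F\,\!</math> table.<br />

```python
def lack_of_fit_test(ss_e, ss_pe, n, m, f_crit):
    """Lack-of-fit F test with m repeated observations at each of
    n levels of x. Returns (F0, reject)."""
    ss_lof = ss_e - ss_pe            # SS_LOF = SS_E - SS_PE
    ms_lof = ss_lof / (n - 2)        # lack-of-fit mean square, n - 2 dof
    ms_pe = ss_pe / (n * m - n)      # pure error mean square, nm - n dof
    f0 = ms_lof / ms_pe              # test statistic
    return f0, f0 > f_crit           # reject adequacy if F0 exceeds f_crit

# Values from the yield data example in this section:
# SS_E = 648.37, SS_PE = 350, n = 25, m = 2, f_0.05,23,25 = 1.97.
f0, reject = lack_of_fit_test(648.37, 350.0, 25, 2, 1.97)
```

With these inputs <math>{{F}_{0}}\approx 0.93\,\!</math>, so the adequacy hypothesis is not rejected.<br />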
<br />
<br />
'''Example'''<br />
<br />
Assume that a second set of observations is taken for the yield data of the preceding [http://reliawiki.org/index.php/Simple_Linear_Regression_Analysis#Simple_Linear_Regression_Analysis table]. The resulting observations are recorded in the following table. To conduct a lack-of-fit test on this data, the statistic <math>{{F}_{0}}=M{{S}_{LOF}}/M{{S}_{PE}}\,\!</math> can be calculated as shown next.<br />
<br />
[[Image:doet4.2.png|center|436px|Yield data from the first and second observation sets for the chemical process example in the Introduction.|link=]] <br />
<br />
<br />
'''Calculation of Least Square Estimates'''<br />
<br />
<br />
The parameters of the fitted regression model can be obtained as:<br />
<br />
<br />
::<math>\begin{align}<br />
<br />
{{{\hat{\beta }}}_{1}} = & \frac{\underset{i=1}{\overset{50}{\mathop \sum }}\,{{y}_{i}}{{x}_{i}}-\frac{\left( \underset{i=1}{\overset{50}{\mathop \sum }}\,{{y}_{i}} \right)\left( \underset{i=1}{\overset{50}{\mathop \sum }}\,{{x}_{i}} \right)}{50}}{\underset{i=1}{\overset{50}{\mathop \sum }}\,{{({{x}_{i}}-\bar{x})}^{2}}} \\ <br />
= & \frac{648532-\frac{8356\times 3742}{50}}{11358.72} \\ <br />
= & 2.04 \end{align}\,\!</math><br />
<br />
<br />
::<math>\begin{align} <br />
{{{\hat{\beta }}}_{0}}= & \bar{y}-{{{\hat{\beta }}}_{1}}\bar{x} \\ <br />
= & 167.12-2.04\times 74.84 \\ <br />
= & 14.47 <br />
<br />
\end{align}\,\!</math><br />
<br />
<br />
Knowing <math>{{\hat{\beta }}_{1}}\,\!</math> and <math>{{\hat{\beta }}_{0}}\,\!</math>, the fitted values, <math>{{\hat{y}}_{i}}\,\!</math>, can be calculated.<br />
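The two estimates can be reproduced from the summary quantities shown in the expressions above. The following is a sketch (variable names are illustrative); the four sums are the values given in the text for the 50 observations.<br />

```python
n = 50
sum_xy = 648532.0   # sum of x_i * y_i
sum_y = 8356.0      # sum of y_i
sum_x = 3742.0      # sum of x_i
s_xx = 11358.72     # sum of (x_i - x_bar)^2

x_bar = sum_x / n
y_bar = sum_y / n
beta1 = (sum_xy - sum_y * sum_x / n) / s_xx  # slope estimate
beta0 = y_bar - beta1 * x_bar                # intercept estimate
```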
<br />
<br />
'''Calculation of the Sum of Squares'''<br />
<br />
Using the fitted values, the sum of squares can be obtained as follows:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{T}} = & \underset{i=1}{\overset{50}{\mathop \sum }}\,{{({{y}_{i}}-\bar{y})}^{2}} \\ <br />
= & 47907.28 \end{align}\,\!</math><br />
<br />
<br />
::<math>\begin{align} <br />
S{{S}_{R}} = & \underset{i=1}{\overset{50}{\mathop \sum }}\,{{({{{\hat{y}}}_{i}}-\bar{y})}^{2}} \\ <br />
= & 47258.91 \end{align}<br />
\,\!</math><br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{E}} = & \underset{i=1}{\overset{50}{\mathop \sum }}\,{{({{y}_{i}}-{{{\hat{y}}}_{i}})}^{2}} \\ <br />
= & 648.37 \end{align}<br />
\,\!</math><br />
<br />
<br />
'''Calculation of <math>M{{S}_{LOF}}\,\!</math>'''<br />
<br />
<br />
The error sum of squares, <math>S{{S}_{E}}\,\!</math>, can now be split into the sum of squares due to pure error, <math>S{{S}_{PE}}\,\!</math>, and the sum of squares due to lack-of-fit, <math>S{{S}_{LOF}}\,\!</math>. <math>S{{S}_{PE}}\,\!</math> can be calculated as follows considering that in this example <math>n=25\,\!</math> and <math>m=2\,\!</math>:<br />
<br />
<br />
::<math><br />
<br />
\begin{align}<br />
S{{S}_{PE}} & = \underset{i=1}{\overset{n}{\mathop \sum }}\,\underset{j=1}{\overset{{{m}_{i}}}{\mathop \sum }}\,{{({{y}_{ij}}-{{{\bar{y}}}_{i}})}^{2}} \\ <br />
& = \underset{i=1}{\overset{25}{\mathop \sum }}\,\underset{j=1}{\overset{2}{\mathop \sum }}\,{{({{y}_{ij}}-{{{\bar{y}}}_{i}})}^{2}} \\ <br />
& = 350 <br />
\end{align}\,\!<br />
<br />
</math><br />
<br />
<br />
The number of degrees of freedom associated with <math>S{{S}_{PE}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
dof(S{{S}_{PE}}) & = nm-n \\ <br />
& = 25\times 2-25 \\ <br />
& = 25 <br />
\end{align}\,\!</math><br />
<br />
<br />
The corresponding mean square, <math>M{{S}_{PE}}\,\!</math>, can now be obtained as:<br />
<br />
<br />
::<math>\begin{align}<br />
M{{S}_{PE}} & = \frac{S{{S}_{PE}}}{dof(S{{S}_{PE}})} \\ <br />
& = \frac{350}{25} \\ <br />
& = 14 <br />
\end{align}\,\!</math><br />
<br />
<br />
<math>S{{S}_{LOF}}\,\!</math> can be obtained by subtraction from <math>S{{S}_{E}}\,\!</math> as:<br />
<br />
<br />
::<math>\begin{align}<br />
S{{S}_{LOF}} & = S{{S}_{E}}-S{{S}_{PE}} \\ <br />
& = 648.37-350 \\ <br />
& = 298.37 <br />
\end{align}\,\!</math><br />
<br />
<br />
Similarly, the number of degrees of freedom associated with <math>S{{S}_{LOF}}\,\!</math> is:<br />
<br />
<br />
::<math>\begin{align}<br />
dof(S{{S}_{LOF}}) & = dof(S{{S}_{E}})-dof(S{{S}_{PE}}) \\ <br />
& = (nm-2)-(nm-n) \\ <br />
& = 23 <br />
\end{align}\,\!</math><br />
<br />
<br />
The lack-of-fit mean square is:<br />
<br />
<br />
::<math>\begin{align}<br />
M{{S}_{LOF}} & = \frac{S{{S}_{LOF}}}{dof(S{{S}_{LOF}})} \\ <br />
& = \frac{298.37}{23} \\ <br />
& = 12.97 <br />
\end{align}\,\!</math><br />
<br />
<br />
'''Calculation of the Test Statistic'''<br />
<br />
<br />
The test statistic for the lack-of-fit test can now be calculated as:<br />
<br />
<br />
::<math>\begin{align}<br />
{{f}_{0}} & = \frac{M{{S}_{LOF}}}{M{{S}_{PE}}} \\ <br />
& = \frac{12.97}{14} \\ <br />
& = 0.93 <br />
\end{align}\,\!</math><br />
<br />
<br />
The critical value for this test is:<br />
<br />
<br />
::<math>{{f}_{0.05,23,25}}=1.97\,\!</math><br />
<br />
<br />
Since <math>{{f}_{0}}<{{f}_{0.05,23,25}}\,\!</math>, we fail to reject the hypothesis that the model adequately fits the data. The <math>p\,\!</math> value for this case is:<br />
<br />
<br />
::<math>\begin{align}<br />
p\text{ }value & = 1-P(F\le {{f}_{0}}) \\ <br />
& = 1-0.43 \\ <br />
& = 0.57 <br />
\end{align}\,\!</math><br />
<br />
<br />
Therefore, at a significance level of 0.05 we conclude that the simple linear regression model, <math>y=14.47+2.04x\,\!</math>, is adequate for the observed data. The following table presents a summary of the ANOVA calculations for the lack-of-fit test.<br />
<br />
<br />
[[Image:doe4.18.png|center|700px|ANOVA table for the lack-of-fit test of the yield data example.]]<br />
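As a quick arithmetic check, the sums of squares reported in this example satisfy both partitions used in the analysis, <math>S{{S}_{T}}=S{{S}_{R}}+S{{S}_{E}}\,\!</math> and <math>S{{S}_{E}}=S{{S}_{PE}}+S{{S}_{LOF}}\,\!</math>. A sketch (values copied from the calculations above):<br />

```python
# Sums of squares from this example.
ss_t, ss_r, ss_e = 47907.28, 47258.91, 648.37  # total, regression, error
ss_pe, ss_lof = 350.0, 298.37                  # pure error, lack of fit

total_check = ss_r + ss_e     # should reproduce SS_T
error_check = ss_pe + ss_lof  # should reproduce SS_E
```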
<br />
==Transformations==<br />
The linear regression model may not be directly applicable to certain data. Non-linearity may be detected from scatter plots, may be known from the underlying theory of the product or process, or may be known from past experience. Transformations on either the predictor variable, <math>x\,\!</math>, or the response variable, <math>Y\,\!</math>, may often be sufficient to make the linear regression model appropriate for the transformed data.<br />
If it is known that the data follows the lognormal distribution, then a logarithmic transformation on <math>Y\,\!</math> (i.e., <math>{{Y}^{*}}=\log (Y)\,\!</math>) might be useful. For data following the Poisson distribution, a square root transformation (<math>{{Y}^{*}}=\sqrt{Y}\,\!</math>) is generally applicable.<br />
<br />
Transformations on <math>Y\,\!</math> may also be applied based on the type of scatter plot obtained from the data. The following figure shows a few such examples. <br />
<br />
<br />
[[Image:doe4.17.png|center|500px|Transformations on ''Y'' for a few possible scatter plots. Plot (a) may require a square root transformation, (b) may require a logarithmic transformation and (c) may require a reciprocal transformation.|link=]]<br />
<br />
<br />
For the scatter plot labeled (a), a square root transformation (<math>{{Y}^{*}}=\sqrt{Y}\,\!</math>) is applicable, while for the plot labeled (b), a logarithmic transformation (i.e., <math>{{Y}^{*}}=\log (Y)\,\!</math>) may be applied. For the plot labeled (c), the reciprocal transformation (<math>{{Y}^{*}}=1/Y\,\!</math>) is applicable. At times it may be helpful to introduce a constant into the transformation of <math>Y\,\!</math>. For example, if <math>Y\,\!</math> can be negative and the logarithmic transformation seems applicable, a suitable constant, <math>k\,\!</math>, may be chosen to make all observed values of <math>k+Y\,\!</math> positive. The transformation in this case would be <math>{{Y}^{*}}=\log (k+Y)\,\!</math>.<br />
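The three transformations, including the shifted logarithm for responses that can be negative, are straightforward to express in code. This sketch is illustrative (function names and data are not from the reference); the shift <math>k\,\!</math> is chosen from the observed minimum so that every shifted value is positive.<br />

```python
import math

def sqrt_transform(y):
    return math.sqrt(y)            # plot (a): Y* = sqrt(Y)

def log_transform(y, k=0.0):
    return math.log(k + y)         # plot (b): Y* = log(k + Y), k = 0 if Y > 0

def reciprocal_transform(y):
    return 1.0 / y                 # plot (c): Y* = 1 / Y

# Example: shift a response that dips below zero before taking logs.
ys = [-0.5, 1.2, 3.0]
k = 1.0 - min(ys)                  # guarantees k + y >= 1 > 0 for all observations
transformed = [log_transform(y, k) for y in ys]
```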
<br />
The Box-Cox method may also be used to automatically identify a suitable power transformation for the data based on the relation:<br />
<br />
<br />
::<math>{{Y}^{*}}={{Y}^{\lambda }}\,\!</math><br />
<br />
<br />
Here the parameter <math>\lambda\,\!</math> is determined using the given data such that <math>S{{S}_{E}}\,\!</math> is minimized (details on this method are presented in [[One Factor Designs]]).</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Template:Doebook&diff=65365Template:Doebook2018-08-09T22:18:05Z<p>Chuck Smith: </p>
<hr />
<div><div class="noprint"><br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
|style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(229,178,27); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
{{Font|Chapter {{{1}}}:|16|tahoma|bold|white}} {{Font| {{PAGENAME}}|16|tahoma|bold|white}}<br />
|}<br />
{| width="300" align="right" class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="5"<br />
|-<br />
| width="10" bgcolor="#ffffff" rowspan="2" | <br> <br />
| align="center" valign="middle" style="border: 1px solid rgb(206, 242, 224); color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);" | <br />
[[Image:DOEbox.png|100px|link=Experiment Design and Analysis Reference]] <br>{{font|[[Experiment Design and Analysis Reference|Index]]|12|bold|blue}} <br />
{| width="100%" align="center" class="FCK__ShowTableBorders" border="0" cellspacing="1" cellpadding="1"<br />
|-<br />
| align="center" valign="middle" | {{Font|Chapter {{{1}}}|16|tahoma|bold|white}}&nbsp;<br />
|-<br />
| align="center" valign="middle" rowspan="2" | {{Font|{{PAGENAME}}|12|tahoma|normal|black}}&nbsp;<br />
|}<br />
<br />
<span style="font-size: 9pt; font-weight: normal; font-name: tahoma;"> <br />
__TOC__ <br />
</span> <br />
[[Image:synthesis-icon.png|link=https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Rw01CJDOxn%2FabhkPlZsy6DwBQ%2BaCXsGR|left]]<p style="text-align: left;">'''Available Software:''' <br>[https://koi-3QN72QORVC.marketingautomation.services/net/m?md=Rw01CJDOxn%2FabhkPlZsy6DwBQ%2BaCXsGR Weibull++]</p><br />
[[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf|left|36px]]<p style="text-align: left;">'''Download Reference Book:''' <br>[http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf Experiment Design & Analysis (*.pdf)]</p><br />
[[Image:Generate_book.png|link=http://reliawiki.org/index.php/ReliaWiki:Books/Experiment_Design_and_Analysis_Reference_eBook|left|36px]]<p style="text-align: left;">'''Generate Reference Book:''' <br>[http://reliawiki.org/index.php/ReliaWiki:Books/Experiment_Design_and_Analysis_Reference_eBook File] may be more up-to-date</p><br />
<br />
|}<br />
</div> <br />
<includeonly></includeonly><noinclude>{{Template:Doebook/documentation}}</noinclude></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Main_Page&diff=65286Main Page2017-12-07T18:10:36Z<p>Chuck Smith: </p>
<hr />
<div>{{DISPLAYTITLE:ReliaWiki}} __NOTOC__ __NOEDITSECTION__ <br />
<div style="position:relative; float:left; display:block; width:100%; margin:10px;"><br />
ReliaWiki is owned and maintained by [https://www.hbmprenscia.com HBM Prenscia] and is an extension of &nbsp;[http://www.weibull.com weibull.com]. <!--For additional resources, visit [http://www.reliasoft.tv ReliaSoft.tv], [http://www.reliability-discussion.com/ Reliability Discussion Forum] and the [http://www.reliabilityprofessional.org/ Certified Reliability Professional (CRP) Program]. --><br />
</div><br />
<br />
<div style="position:relative; float:left; width:100%;"><br />
<br />
<div style="position:relative; float:left; width:49%; margin:5px;"><br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=blue_triangle.png<br />
|title=Life Data Analysis<br />
|links=<br />
{{TitleBoxLink|link=Life_Data_Analysis_Reference_Book|text=Reference Book}}<br />
{{TitleBoxLink|link=Weibull++_Examples|text=Weibull++ Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=green_triangle.png<br />
|title=System Analysis (RBDs and Fault Trees)<br />
|links=<br />
{{TitleBoxLink|link=System_Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=BlockSim_Examples|text=BlockSim Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=red_triangle.png<br />
|title=Reliability Growth and Repairable System Analysis<br />
|links=<br />
{{TitleBoxLink|link=Reliability_Growth_and_Repairable_System_Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=RGA_Examples|text=RGA Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=rcm_triangle.png<br />
|title=Reliability Centered Maintenance (RCM)<br />
|links=<br />
{{TitleBoxLink|link=RCM%2B%2B_Examples|text=RCM++ Software Examples}}<br />
}}<br />
</div><br />
<br />
<div style="position:relative; float:left; width:49%; margin:5px;"><br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=yellow_triangle.png<br />
|title=Accelerated Life Testing Data Analysis<br />
|links=<br />
{{TitleBoxLink|link=Accelerated Life Testing Data Analysis_Reference|text=Reference Book}}<br />
{{TitleBoxLink|link=ALTA_Examples|text=ALTA Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=doe_triangle.png<br />
|title=Experiment Design and Analysis (DOE)<br />
|links=<br />
{{TitleBoxLink|link=Experiment_Design_and_Analysis_Reference|text=Reference Book}}<br />
<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=fmea_triangle.png<br />
|title=Failure Modes &amp; Effects Analysis (FMEA)<br />
|links=<br />
{{TitleBoxLink|link=FMEA_and_RCM_Articles|text=Articles}}<br />
{{TitleBoxLink|link=Xfmea_Examples|text=Xfmea Software Examples}}<br />
}}<br />
<br />
{{TitleBox<br />
|bgcolor=#C8D4E0<br />
|image=api_triangle.png<br />
|title=Synthesis API<br />
|links=<br />
{{TitleBoxLink|link=Synthesis API Reference|text=API Reference}}<br />
{{TitleBoxLink|link=API_Changelog|text=API Changelog}}<br />
}}<br />
</div><br />
</div><br />
<div style="position:relative; float:left; width:100%;"><br />
<br><br><cshow logged="1">This text will appear if a user with membership to 'sysop' group views this page</cshow> {{ReliaSoft Footer}}<br />
</div></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Template:Doebook&diff=50924Template:Doebook2014-02-18T21:11:52Z<p>Chuck Smith: </p>
<hr />
<div><div class="noprint"><br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
|style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(229,178,27); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
{{Font|Chapter {{{1}}}:|16|tahoma|bold|white}} {{Font| {{PAGENAME}}|16|tahoma|bold|white}}<br />
|}<br />
{| width="300" align="right" class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="5"<br />
|-<br />
| width="10" bgcolor="#ffffff" rowspan="2" | <br> <br />
| align="center" valign="middle" style="border: 1px solid rgb(206, 242, 224); color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);" | <br />
[[Image:DOEbox.png|100px|link=Experiment Design and Analysis Reference]] <br>{{font|[[Experiment Design and Analysis Reference|Index]]|12|bold|blue}} <br />
{| width="100%" align="center" class="FCK__ShowTableBorders" border="0" cellspacing="1" cellpadding="1"<br />
|-<br />
| align="center" valign="middle" | {{Font|Chapter {{{1}}}|16|tahoma|bold|white}}&nbsp;<br />
|-<br />
| align="center" valign="middle" rowspan="2" | {{Font|{{PAGENAME}}|12|tahoma|normal|black}}&nbsp;<br />
|}<br />
<br />
<span style="font-size: 9pt; font-weight: normal; font-name: tahoma;"> <br />
__TOC__ <br />
</span> <br />
[[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf|left|36px]]<p style="text-align: left;">'''Download Reference Book:''' <br>[http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf Experiment Design & Analysis (*.pdf)]</p><br />
[[Image:Generate_book.png|link=http://reliawiki.org/index.php/ReliaWiki:Books/Experiment_Design_and_Analysis_Reference_eBook|left|36px]]<p style="text-align: left;">'''Generate Reference Book:''' <br>[http://reliawiki.org/index.php/ReliaWiki:Books/Experiment_Design_and_Analysis_Reference_eBook File] may be more up-to-date<br />
</p><br />
|}<br />
</div> <br />
[[Category:Experiment Design and Analysis Reference]] [[Category:Reliability_Engineering_Textbooks]]<br />
<noinclude>==Usage==<br />
Please use this template for the main chapters in the DOE++ book. The template takes one parameter, and that is the chapter number of the article.<br />
<pre>Here's an example: {{Template:Doebook|XX}}</pre><br />
{{Template:Doebook|XX}}<br />
[[Category:Templates]]<br />
</noinclude></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Template:ALTABOOK&diff=50923Template:ALTABOOK2014-02-18T21:10:51Z<p>Chuck Smith: </p>
<hr />
<div><div class="noprint"><br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
|style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(250,182,22); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
{{Font|Chapter {{{1}}}:|16|tahoma|bold|white}} {{Font| {{PAGENAME}}|16|tahoma|bold|white}}<br />
|}<br />
{| width="300" align="right" class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="5"<br />
|-<br />
| width="10" bgcolor="#ffffff" rowspan="2" | <br> <br />
| align="center" valign="middle" style="border: 1px solid rgb(206, 242, 224); color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);" | <br />
[[Image:ALTAbox.png|100px|link=Accelerated_Life_Testing_Data_Analysis_Reference]] <br>{{font|[[Accelerated_Life_Testing_Data_Analysis_Reference|Index]]|12|bold|blue}} <br />
{| width="100%" align="center" class="FCK__ShowTableBorders" border="0" cellspacing="1" cellpadding="1"<br />
|-<br />
| align="center" valign="middle" | {{Font|Chapter {{{1}}}|16|tahoma|bold|white}}&nbsp;<br />
|-<br />
| align="center" valign="middle" rowspan="2" | {{Font|{{PAGENAME}}|12|tahoma|normal|black}}&nbsp;<br />
|}<br />
<br />
<span style="font-size: 9pt; font-weight: normal; font-name: tahoma;"> <br />
__TOC__ <br />
</span> <br />
[[Image:Examples_icon.png|link=ALTA_Examples|left]]<p style="text-align: left;">'''More Resources:''' <br>[[ALTA Examples|ALTA Examples Collection]]</p><br />
[[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Accelerated_Life_Testing_Reference.pdf|left|36px]]<p style="text-align: left;">'''Download Reference Book:''' <br>[http://www.synthesisplatform.net/references/Accelerated_Life_Testing_Reference.pdf Accelerated Life Testing (*.pdf)]</p><br />
[[Image:Generate_book.png|link=http://reliawiki.org/index.php/ReliaWiki:Books/Accelerated_Life_Testing_Reference_eBook|left|36px]]<p style="text-align: left;">'''Generate Reference Book:''' <br>[http://reliawiki.org/index.php/ReliaWiki:Books/Accelerated_Life_Testing_Reference_eBook File] may be more up-to-date<br />
</p><br />
|}<br />
</div> <br />
[[Category:Accelerated_Life_Testing_Data_Analysis_Reference]] [[Category:Reliability_Engineering_Textbooks]]<br />
<noinclude>==Usage==<br />
Please use this template for the main chapters in the Weibull++ book. The template takes one parameter, and that is the chapter number of the article.<br />
<pre>Here's an example: {{Template:ALTABOOK|XX}}</pre><br />
{{Template:ALTABOOK|XX}}<br />
[[Category:Templates]]<br />
</noinclude></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Template:Bsbook&diff=50922Template:Bsbook2014-02-18T21:10:12Z<p>Chuck Smith: </p>
<hr />
<div><div class="noprint"><br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
|style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(114,159,113); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
{{Font|Chapter {{{1}}}:|16|tahoma|bold|white}} {{Font| {{PAGENAME}}|16|tahoma|bold|white}}<br />
|}<br />
{| width="300" align="right" class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="5"<br />
|-<br />
| width="10" bgcolor="#ffffff" rowspan="2" | <br> <br />
| align="center" valign="middle" style="border: 1px solid rgb(206, 242, 224); color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);" | <br />
[[Image:BlockSimbox.png|100px|link=System_Analysis_Reference]] <br>{{font|[[System Analysis Reference|Index]]|12|bold|blue}} <br />
{| width="100%" align="center" class="FCK__ShowTableBorders" border="0" cellspacing="1" cellpadding="1"<br />
|-<br />
| align="center" valign="middle" | {{Font|Chapter {{{1}}}|16|tahoma|bold|white}}&nbsp;<br />
|-<br />
| align="center" valign="middle" rowspan="2" | {{Font|{{PAGENAME}}|12|tahoma|normal|black}}&nbsp;<br />
|}<br />
<br />
<span style="font-size: 9pt; font-weight: normal; font-name: tahoma;"> <br />
__TOC__ <br />
</span> <br />
[[Image:Examples_icon.png|link=BlockSim_Examples|left]]<p style="text-align: left;">'''More Resources:''' <br>[[BlockSim Examples|BlockSim Examples Collection]]</p><br />
[[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/System_Analysis_Reference.pdf|left|36px]]<p style="text-align: left;">'''Download Reference Book:''' <br>[http://www.synthesisplatform.net/references/System_Analysis_Reference.pdf System Analysis (*.pdf)]</p><br />
[[Image:Generate_book.png|link=http://reliawiki.org/index.php/ReliaWiki:Books/System_Analysis_Reference_eBook|left|36px]]<p style="text-align: left;">'''Generate Reference Book:''' <br>[http://reliawiki.org/index.php/ReliaWiki:Books/System_Analysis_Reference_eBook File] may be more up-to-date<br />
</p><br />
|}<br />
</div> <br />
[[Category:System Analysis Reference]] [[Category:Reliability_Engineering_Textbooks]]<br />
<noinclude>==Usage==<br />
Please use this template for the main chapters in the BlockSim book. The template takes one parameter, and that is the chapter number of the article.<br />
<pre>Here's an example: {{Template:Bsbook|XX}}</pre><br />
{{Template:Bsbook|XX}}<br />
[[Category:Templates]]<br />
</noinclude></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Template:ALTABOOK&diff=50921Template:ALTABOOK2014-02-18T21:09:08Z<p>Chuck Smith: </p>
<hr />
<div><div class="noprint"><br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
|style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(250,182,22); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
{{Font|Chapter {{{1}}}:|16|tahoma|bold|white}} {{Font| {{PAGENAME}}|16|tahoma|bold|white}}<br />
|}<br />
{| width="300" align="right" class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="5"<br />
|-<br />
| width="10" bgcolor="#ffffff" rowspan="2" | <br> <br />
| align="center" valign="middle" style="border: 1px solid rgb(206, 242, 224); color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);" | <br />
[[Image:ALTAbox.png|100px|link=Accelerated_Life_Testing_Data_Analysis_Reference]] <br>{{font|[[Accelerated_Life_Testing_Data_Analysis_Reference|Index]]|12|bold|blue}} <br />
{| width="100%" align="center" class="FCK__ShowTableBorders" border="0" cellspacing="1" cellpadding="1"<br />
|-<br />
| align="center" valign="middle" | {{Font|Chapter {{{1}}}|16|tahoma|bold|white}}&nbsp;<br />
|-<br />
| align="center" valign="middle" rowspan="2" | {{Font|{{PAGENAME}}|12|tahoma|normal|black}}&nbsp;<br />
|}<br />
<br />
<span style="font-size: 9pt; font-weight: normal; font-name: tahoma;"> <br />
__TOC__ <br />
</span> <br />
[[Image:Examples_icon.png|link=ALTA_Examples|left]]<p style="text-align: left;">'''More Resources:''' <br>[[ALTA Examples|ALTA Examples Collection]]</p><br />
[[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Accelerated_Life_Testing_Reference.pdf|left|36px]]<p style="text-align: left;">'''Download Reference Book:''' <br>[http://www.synthesisplatform.net/references/Accelerated_Life_Testing_Reference.pdf Accelerated Life Testing (*.pdf)]</p><br />
[[Image:Generate_book.png|link=http://www.synthesisplatform.net/references/Accelerated_Life_Testing_Reference.pdf|left|36px]]<p style="text-align: left;">'''Generate Reference Book:''' <br>[http://www.synthesisplatform.net/references/Accelerated_Life_Testing_Reference.pdf File] may be more up-to-date<br />
</p><br />
|}<br />
</div> <br />
[[Category:Accelerated_Life_Testing_Data_Analysis_Reference]] [[Category:Reliability_Engineering_Textbooks]]<br />
<noinclude>==Usage==<br />
Please use this template for the main chapters in the Weibull++ book. The template takes one parameter, and that is the chapter number of the article.<br />
<pre>Here's an example: {{Template:ALTABOOK|XX}}</pre><br />
{{Template:ALTABOOK|XX}}<br />
[[Category:Templates]]<br />
</noinclude></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Template:LDABOOK&diff=50920Template:LDABOOK2014-02-18T21:07:48Z<p>Chuck Smith: </p>
<hr />
<div><div class="noprint"><br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
|style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(42,145,198); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
{{Font|Chapter {{{1}}}:|16|tahoma|bold|white}} {{Font| {{PAGENAME}}|16|tahoma|bold|white}}<br />
|}<br />
{| width="300" align="right" class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="5"<br />
|-<br />
| width="10" bgcolor="#ffffff" rowspan="2" | <br> <br />
| align="center" valign="middle" style="border: 1px solid rgb(206, 242, 224); color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);" | <br />
[[Image:Weibullbox.png|100px|link=Life_Data_Analysis_Reference_Book]] <br>{{font|[[Life_Data_Analysis_Reference_Book|Index]]|12|bold|blue}} <br />
{| width="100%" align="center" class="FCK__ShowTableBorders" border="0" cellspacing="1" cellpadding="1"<br />
|-<br />
| align="center" valign="middle" | {{Font|Chapter {{{1}}}|16|tahoma|bold|white}}&nbsp;<br />
|-<br />
| align="center" valign="middle" rowspan="2" | {{Font|{{PAGENAME}}|12|tahoma|normal|black}}&nbsp;<br />
|}<br />
<br />
<span style="font-size: 9pt; font-weight: normal; font-family: tahoma;"> <br />
__TOC__ <br />
</span> <br />
[[Image:Examples_icon.png|link=Weibull++_Examples|left]]<p style="text-align: left;">'''More Resources:''' <br>[[Weibull++ Examples|Weibull++ Examples Collection]]</p><br />
[[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Life_Data_Analysis_Reference.pdf|left|36px]]<p style="text-align: left;">'''Download Reference Book:''' <br>[http://www.synthesisplatform.net/references/Life_Data_Analysis_Reference.pdf Life Data Analysis (*.pdf)]</p><br />
[[Image:Generate_book.png|link=http://reliawiki.org/index.php/ReliaWiki:Books/Life_Data_Analysis_Reference_eBook|left|36px]]<p style="text-align: left;">'''Generate Reference Book:''' <br>[http://reliawiki.org/index.php/ReliaWiki:Books/Life_Data_Analysis_Reference_eBook File] may be more up-to-date<br />
</p><br />
|}<br />
</div> <br />
[[Category:Life_Data_Analysis_Reference]] [[Category:Reliability_Engineering_Textbooks]]<br />
<noinclude>==Usage==<br />
Please use this template for the main chapters in the Weibull++ book. The template takes one parameter: the chapter number of the article.<br />
<pre>Here's an example: {{Template:LDABOOK|XX}}</pre><br />
{{Template:LDABOOK|XX}}<br />
[[Category:Templates]]<br />
</noinclude></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Experiment_Design_and_Analysis_Reference&diff=50919Experiment Design and Analysis Reference2014-02-18T21:06:57Z<p>Chuck Smith: </p>
<hr />
<div>{{Allbooksindex}}<br />
{| width="600" border="0" align="center" cellpadding="3" cellspacing="1"<br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" align="center" valign="top" bgcolor="#E5B21B"| <font color="#ffffff" size="3">ReliaSoft's Experiment Design and Analysis Reference</font> <br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" align="center" valign="top" bgcolor="#E5B21B" | <font color="#ffffff" size="4">Chapter Index</font> <br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" | <br />
#[[DOE Overview]]<br />
#[[Statistical Background on DOE]]<br />
#[[Simple Linear Regression Analysis]]<br />
#[[Multiple Linear Regression Analysis]]<br />
#[[One Factor Designs]]<br />
#[[General Full Factorial Designs]]<br />
#[[Randomization and Blocking in DOE]]<br />
#[[Two Level Factorial Experiments]]<br />
#[[Highly Fractional Factorial Designs]]<br />
#[[Response Surface Methods for Optimization]]<br />
#[[Design Evaluation and Power Study]]<br />
#[[Optimal Custom Designs]]<br />
#[[Robust Parameter Design]]<br />
#[[Reliability DOE for Life Tests]]<br />
#[[Measurement System Analysis]]<br />
#Appendices <br />
#*[[ANOVA Calculations in Multiple Linear Regression|Appendix A: ANOVA Calculations in Multiple Linear Regression]]<br />
#*[[Use of Regression to Calculate Sum of Squares|Appendix B: Use of Regression to Calculate Sum of Squares]]<br />
#*[[Plackett-Burman Designs|Appendix C: Plackett-Burman Designs]]<br />
#*[[Taguchi Orthogonal Arrays|Appendix D: Taguchi's Orthogonal Arrays]]<br />
#*[[Alias Relations for Taguchi Orthogonal Arrays|Appendix E: Alias Relations for Taguchi's Orthogonal Arrays]]<br />
#*[[Box-Behnken Designs|Appendix F: Box-Behnken Designs]]<br />
#*[[DOE Glossary|Appendix G: Glossary]]<br />
#*[[DOE References|Appendix H: References]]<br />
|}<br />
{| width="600" border="0" align="center" cellpadding="3" cellspacing="0" <br />
|-<br />
| align="center" valign="middle" bgcolor="#dddddd" | [[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf|left|50px]]<p style="text-align: left;">[http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf Download this book as a print-ready *.pdf] -or-<br>[http://reliawiki.org/index.php/ReliaWiki:Books/Experiment_Design_and_Analysis_Reference_eBook Generate your own file] (may be more up-to-date)</p><br />
|}</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=System_Analysis_Reference&diff=50918System Analysis Reference2014-02-18T21:06:38Z<p>Chuck Smith: </p>
<hr />
<div>{{Allbooksindex}}<br />
{| width="600" border="0" align="center" cellpadding="3" cellspacing="1"<br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" align="center" valign="top" bgcolor="#729F71"| <font color="#ffffff" size="3">ReliaSoft's System Analysis Reference</font> <br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" align="center" valign="top" bgcolor="#729F71" | <font color="#ffffff" size="4">Chapter Index</font> <br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" | <br />
#[[Basics of System Reliability Analysis]]<br />
#[[Statistical Background]]<br />
#[[RBDs and Analytical System Reliability]]<br />
#[[Time-Dependent System Reliability (Analytical)]]<br />
#[[Reliability Importance and Optimized Reliability Allocation (Analytical)]]<br />
#[[Introduction to Repairable Systems]]<br />
#[[Repairable Systems Analysis Through Simulation]]<br />
#[[Additional Analyses|Additional Simulation Analyses: Throughput and Life Cycle Cost Analysis]]<br />
#[[Fault Tree Diagrams and System Analysis]]<br />
#[[Reliability Phase Diagrams (RPDs)]]<br />
#Appendices<br />
##[[Appendix A: Generating Random Numbers from a Distribution]]<br />
##[[Appendix B: References]]<br />
|}<br />
{| width="600" border="0" align="center" cellpadding="3" cellspacing="0" <br />
|-<br />
| align="center" valign="middle" bgcolor="#dddddd" | [[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/System_Analysis_Reference.pdf|left|50px]]<p style="text-align: left;">[http://www.synthesisplatform.net/references/System_Analysis_Reference.pdf Download this book as a print-ready *.pdf] -or-<br>[http://reliawiki.org/index.php/ReliaWiki:Books/System_Analysis_Reference_eBook Generate your own file] (may be more up-to-date)</p><br />
|}<br />
<br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
| style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
<br> {{Allbooksindex footer|BlockSim Examples|BlockSim}}<br />
[[Image:BlockSim Examples Banner.png|link=BlockSim Examples|center|300px]] <br />
|}</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Accelerated_Life_Testing_Data_Analysis_Reference&diff=50917Accelerated Life Testing Data Analysis Reference2014-02-18T21:06:19Z<p>Chuck Smith: </p>
<hr />
<div>{{Allbooksindex}} <br />
{| width="600" border="0" align="center" cellpadding="3" cellspacing="1"<br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" align="center" valign="top" bgcolor="#fab616"| <font color="#ffffff" size="3">ReliaSoft's Accelerated Life Testing Reference</font> <br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" align="center" valign="top" bgcolor="#fab616" | <font color="#ffffff" size="4">Chapter Index</font> <br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" | <br />
<ol><br />
<li>[[Introduction to Accelerated Life Testing]]</li><br />
<li>[[Accelerated Life Testing and ALTA]]</li><br />
<li>[[Distributions Used in Accelerated Testing]]</li><br />
<li>[[Arrhenius Relationship]]</li><br />
<li>[[Eyring Relationship]]</li><br />
<li>[[Inverse Power Law Relationship]]</li><br />
<li>[[Temperature-Humidity Relationship]]</li><br />
<li>[[Temperature-NonThermal Relationship]]</li><br />
<li>[[Multivariable Relationships: General Log-Linear and Proportional Hazards]]</li><br />
<li>[[Time-Varying Stress Models]]</li><br />
<li>Additional Tools<br />
*[[Additional Tools|Likelihood Ratio Test, Tests of Comparison and Degradation Analysis]]<br />
*[[Accelerated Life Test Plans]]</li><br />
<li>Appendices<br />
*[[Appendix A: Brief Statistical Background]]<br />
*[[Appendix B: Parameter Estimation]]<br />
*[[Appendix C: Benchmark Examples]]<br />
*[[Appendix D: Confidence Bounds]]<br />
*[[Appendix E: References]]</li><br />
</ol><br />
|}<br />
{| width="600" border="0" align="center" cellpadding="3" cellspacing="0" <br />
|-<br />
| align="center" valign="middle" bgcolor="#dddddd" | [[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Accelerated_Life_Testing_Reference.pdf|left|50px]]<p style="text-align: left;">[http://www.synthesisplatform.net/references/Accelerated_Life_Testing_Reference.pdf Download this book as a print-ready *.pdf] -or-<br>[http://reliawiki.org/index.php/ReliaWiki:Books/Accelerated_Life_Testing_Reference_eBook Generate your own file] (may be more up-to-date)</p><br />
|}<br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
| style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
<br>{{Allbooksindex footer|ALTA Examples|ALTA}}<br />
[[Image:ALTA_Examples_Banner.png|link=ALTA_Examples|center|300px]] <br />
|}</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Life_Data_Analysis_Reference_Book&diff=50916Life Data Analysis Reference Book2014-02-18T21:05:57Z<p>Chuck Smith: </p>
<hr />
<div>{{Allbooksindex}} <br />
{| width="600" border="0" align="center" cellpadding="3" cellspacing="1"<br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" align="center" valign="top" bgcolor="#2a91c6"| <font color="#ffffff" size="3">ReliaSoft's Life Data Analysis Reference</font> <br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" align="center" valign="top" bgcolor="#2a91c6" | <font color="#ffffff" size="4">Chapter Index</font> <br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" | <br />
#[[Introduction to Life Data Analysis]] <br />
#[[Basic Statistical Background]] <br />
#[[Life Distributions]] <br />
#[[Parameter Estimation]] <br />
#[[Life Data Classification]] <br />
#[[Confidence Bounds]] <br />
#[[The Exponential Distribution]] <br />
#[[The Weibull Distribution]] <br />
#[[The Normal Distribution]] <br />
#[[The Lognormal Distribution]] <br />
#[[The Mixed Weibull Distribution]] <br />
#[[The Generalized Gamma Distribution]] <br />
#[[The Gamma Distribution]] <br />
#[[The Logistic Distribution]] <br />
#[[The Loglogistic Distribution]] <br />
#[[The Gumbel/SEV Distribution]] <br />
#[[Non-Parametric Life Data Analysis]] <br />
#[[Competing Failure Modes Analysis]] <br />
#[[Warranty Data Analysis]] <br />
#[[Recurrent Event Data Analysis]] <br />
#[[Degradation Data Analysis]] <br />
#[[Reliability Test Design]] <br />
#Additional Reliability Analysis Tools <br />
#*[[Stress-Strength Analysis]] <br />
#*[[Comparing Life Data Sets]] <br />
#*[[Risk Analysis and Probabilistic Design with Monte Carlo Simulation]] <br />
#*[[Weibull++ SimuMatic]] <br />
#*[[Target Reliability Tool]] <br />
#*[[Event Log Data|Event Log Data Analysis]] <br />
#*[[Maintenance Planning|Maintenance Planning]] <br />
#Appendices <br />
#*[[Least Squares/Rank Regression Equations|Appendix A: Least Squares/Rank Regression Equations]] <br />
#*[[Appendix: Maximum Likelihood Estimation Example|Appendix B: Maximum Likelihood Estimation Example]] <br />
#*[[Appendix: Special Analysis Methods|Appendix C: Special Analysis Methods]] <br />
#*[[Appendix: Log-Likelihood Equations|Appendix D: Log-Likelihood Equations]] <br />
#*[[Appendix: Life Data Analysis References|Appendix E: Life Data Analysis References]]<br />
|}<br />
{| width="600" border="0" align="center" cellpadding="3" cellspacing="0" <br />
|-<br />
| align="center" valign="middle" bgcolor="#dddddd" | [[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Life_Data_Analysis_Reference.pdf|left|50px]]<p style="text-align: left;">[http://www.synthesisplatform.net/references/Life_Data_Analysis_Reference.pdf Download this book as a print-ready *.pdf] -or-<br>[http://reliawiki.org/index.php/ReliaWiki:Books/Life_Data_Analysis_Reference_eBook Generate your own file] (may be more up-to-date)</p><br />
|}<br />
<br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
| style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
<br> {{Allbooksindex footer|Weibull++ Examples|Weibull++}}<br />
[[Image:Weibull Examples Banner.png|link=Weibull++ Examples|center|300px]] <br />
|}</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Template:LDABOOK&diff=50915Template:LDABOOK2014-02-18T21:04:51Z<p>Chuck Smith: </p>
<hr />
<div><div class="noprint"><br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
|style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(42,145,198); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
{{Font|Chapter {{{1}}}:|16|tahoma|bold|white}} {{Font| {{PAGENAME}}|16|tahoma|bold|white}}<br />
|}<br />
{| width="300" align="right" class="FCK__ShowTableBorders" border="0" cellspacing="0" cellpadding="5"<br />
|-<br />
| width="10" bgcolor="#ffffff" rowspan="2" | <br> <br />
| align="center" valign="middle" style="border: 1px solid rgb(206, 242, 224); color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);" | <br />
[[Image:Weibullbox.png|100px|link=Life_Data_Analysis_Reference_Book]] <br>{{font|[[Life_Data_Analysis_Reference_Book|Index]]|12|bold|blue}} <br />
{| width="100%" align="center" class="FCK__ShowTableBorders" border="0" cellspacing="1" cellpadding="1"<br />
|-<br />
| align="center" valign="middle" | {{Font|Chapter {{{1}}}|16|tahoma|bold|white}}&nbsp;<br />
|-<br />
| align="center" valign="middle" rowspan="2" | {{Font|{{PAGENAME}}|12|tahoma|normal|black}}&nbsp;<br />
|}<br />
<br />
<span style="font-size: 9pt; font-weight: normal; font-family: tahoma;"> <br />
__TOC__ <br />
</span> <br />
[[Image:Examples_icon.png|link=Weibull++_Examples|left]]<p style="text-align: left;">'''More Resources:''' <br>[[Weibull++ Examples|Weibull++ Examples Collection]]</p><br />
[[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Life_Data_Analysis_Reference.pdf|left|36px]]<p style="text-align: left;">'''Download Reference Book:''' <br>[http://www.synthesisplatform.net/references/Life_Data_Analysis_Reference.pdf Life Data Analysis (*.pdf)]<br></p><br />
[[Image:Generate_book.png|link=http://reliawiki.org/index.php/ReliaWiki:Books/Life_Data_Analysis_Reference_eBook|left|36px]]<p style="text-align: left;">'''Generate Reference Book:''' <br>[http://reliawiki.org/index.php/ReliaWiki:Books/Life_Data_Analysis_Reference_eBook File] may be more up-to-date<br />
</p><br />
|}<br />
</div> <br />
[[Category:Life_Data_Analysis_Reference]] [[Category:Reliability_Engineering_Textbooks]]<br />
<noinclude>==Usage==<br />
Please use this template for the main chapters in the Weibull++ book. The template takes one parameter: the chapter number of the article.<br />
<pre>Here's an example: {{Template:LDABOOK|XX}}</pre><br />
{{Template:LDABOOK|XX}}<br />
[[Category:Templates]]<br />
</noinclude></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=File:Generate_book.png&diff=50914File:Generate book.png2014-02-18T20:56:19Z<p>Chuck Smith: </p>
<hr />
<div></div>Chuck Smithhttps://www.reliawiki.com/index.php?title=Experiment_Design_and_Analysis_Reference&diff=50913Experiment Design and Analysis Reference2014-02-18T20:44:18Z<p>Chuck Smith: </p>
<hr />
<div>{{Allbooksindex}}<br />
{| width="600" border="0" align="center" cellpadding="3" cellspacing="1"<br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" align="center" valign="top" bgcolor="#E5B21B"| <font color="#ffffff" size="3">ReliaSoft's Experiment Design and Analysis Reference</font> <br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" align="center" valign="top" bgcolor="#E5B21B" | <font color="#ffffff" size="4">Chapter Index</font> <br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" | <br />
#[[DOE Overview]]<br />
#[[Statistical Background on DOE]]<br />
#[[Simple Linear Regression Analysis]]<br />
#[[Multiple Linear Regression Analysis]]<br />
#[[One Factor Designs]]<br />
#[[General Full Factorial Designs]]<br />
#[[Randomization and Blocking in DOE]]<br />
#[[Two Level Factorial Experiments]]<br />
#[[Highly Fractional Factorial Designs]]<br />
#[[Response Surface Methods for Optimization]]<br />
#[[Design Evaluation and Power Study]]<br />
#[[Optimal Custom Designs]]<br />
#[[Robust Parameter Design]]<br />
#[[Reliability DOE for Life Tests]]<br />
#[[Measurement System Analysis]]<br />
#Appendices <br />
#*[[ANOVA Calculations in Multiple Linear Regression|Appendix A: ANOVA Calculations in Multiple Linear Regression]]<br />
#*[[Use of Regression to Calculate Sum of Squares|Appendix B: Use of Regression to Calculate Sum of Squares]]<br />
#*[[Plackett-Burman Designs|Appendix C: Plackett-Burman Designs]]<br />
#*[[Taguchi Orthogonal Arrays|Appendix D: Taguchi's Orthogonal Arrays]]<br />
#*[[Alias Relations for Taguchi Orthogonal Arrays|Appendix E: Alias Relations for Taguchi's Orthogonal Arrays]]<br />
#*[[Box-Behnken Designs|Appendix F: Box-Behnken Designs]]<br />
#*[[DOE Glossary|Appendix G: Glossary]]<br />
#*[[DOE References|Appendix H: References]]<br />
|}<br />
{| width="600" border="0" align="center" cellpadding="3" cellspacing="0" <br />
|-<br />
| align="center" valign="middle" bgcolor="#dddddd" | [[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf|left|50px]]<p style="text-align: left;">[http://www.synthesisplatform.net/references/Experiment_Design_and_Analysis_Reference.pdf Download this Book in *.pdf Format] -or-<br>[http://reliawiki.org/index.php/ReliaWiki:Books/Experiment_Design_and_Analysis_Reference_eBook Generate your own file] (may be more up-to-date)</p><br />
|}</div>Chuck Smithhttps://www.reliawiki.com/index.php?title=System_Analysis_Reference&diff=50912System Analysis Reference2014-02-18T20:43:18Z<p>Chuck Smith: </p>
<hr />
<div>{{Allbooksindex}}<br />
{| width="600" border="0" align="center" cellpadding="3" cellspacing="1"<br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" align="center" valign="top" bgcolor="#729F71"| <font color="#ffffff" size="3">ReliaSoft's System Analysis Reference</font> <br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" align="center" valign="top" bgcolor="#729F71" | <font color="#ffffff" size="4">Chapter Index</font> <br />
|- style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="left"<br />
| colspan="2" | <br />
#[[Basics of System Reliability Analysis]]<br />
#[[Statistical Background]]<br />
#[[RBDs and Analytical System Reliability]]<br />
#[[Time-Dependent System Reliability (Analytical)]]<br />
#[[Reliability Importance and Optimized Reliability Allocation (Analytical)]]<br />
#[[Introduction to Repairable Systems]]<br />
#[[Repairable Systems Analysis Through Simulation]]<br />
#[[Additional Analyses|Additional Simulation Analyses: Throughput and Life Cycle Cost Analysis]]<br />
#[[Fault Tree Diagrams and System Analysis]]<br />
#[[Reliability Phase Diagrams (RPDs)]]<br />
#Appendices<br />
##[[Appendix A: Generating Random Numbers from a Distribution]]<br />
##[[Appendix B: References]]<br />
|}<br />
{| width="600" border="0" align="center" cellpadding="3" cellspacing="0" <br />
|-<br />
| align="center" valign="middle" bgcolor="#dddddd" | [[Image:Pdfdownload.png|link=http://www.synthesisplatform.net/references/System_Analysis_Reference.pdf|left|50px]]<p style="text-align: left;">[http://www.synthesisplatform.net/references/System_Analysis_Reference.pdf Download this Book in *.pdf Format] -or-<br>[http://reliawiki.org/index.php/ReliaWiki:Books/System_Analysis_Reference_eBook Generate your own file] (may be more up-to-date)</p><br />
|}<br />
<br />
{| border="0" cellspacing="0" cellpadding="0" width="100%"<br />
|-<br />
| style="border-bottom: rgb(206,242,224) 1px solid; border-left: rgb(206,242,224) 1px solid; background-color: rgb(247,247,247); color: rgb(0,0,0); border-top: rgb(206,242,224) 1px solid; border-right: rgb(206,242,224) 1px solid;" valign="middle" align="center" | <br />
<br> {{Allbooksindex footer|BlockSim Examples|BlockSim}}<br />
[[Image:BlockSim Examples Banner.png|link=BlockSim Examples|center|300px]] <br />
|}</div>Chuck Smith