Multiple Linear Regression Analysis

Introduction
This chapter expands on the analysis of simple linear regression models and discusses the analysis of multiple linear regression models. A major portion of the results displayed in DOE++ are explained in this chapter because these results are associated with multiple linear regression. One of the applications of multiple linear regression models is Response Surface Methodology (RSM). RSM is a method used to locate the optimum value of the response and is one of the final stages of experimentation. It is discussed in Chapter 9. Towards the end of this chapter, the concept of using indicator variables in regression models is explained. Indicator variables are used to represent qualitative factors in regression models. The concept of using indicator variables is important to gain an understanding of ANOVA models, which are the models used to analyze data obtained from experiments. These models can be thought of as first order multiple linear regression models where all the factors are treated as qualitative factors. ANOVA models are discussed in Chapter 6.

Multiple Linear Regression Model
A linear regression model that contains more than one predictor variable is called a multiple linear regression model. The following model is a multiple linear regression model with two predictor variables, $${{x}_{1}}$$  and  $${{x}_{2}}$$.


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\epsilon $$

The model is linear because it is linear in the parameters $${{\beta }_{0}}$$,  $${{\beta }_{1}}$$  and  $${{\beta }_{2}}$$. The model describes a plane in the three-dimensional space of $$Y$$,  $${{x}_{1}}$$  and  $${{x}_{2}}$$. The parameter $${{\beta }_{0}}$$  is the intercept of this plane. Parameters $${{\beta }_{1}}$$  and  $${{\beta }_{2}}$$  are referred to as partial regression coefficients. Parameter $${{\beta }_{1}}$$  represents the change in the mean response corresponding to a unit change in  $${{x}_{1}}$$  when  $${{x}_{2}}$$  is held constant. Parameter $${{\beta }_{2}}$$  represents the change in the mean response corresponding to a unit change in  $${{x}_{2}}$$  when  $${{x}_{1}}$$  is held constant. Consider the following example of a multiple linear regression model with two predictor variables, $${{x}_{1}}$$  and  $${{x}_{2}}$$ :


 * $$Y=30+5{{x}_{1}}+7{{x}_{2}}+\epsilon $$

This regression model is a first order multiple linear regression model. This is because the maximum power of the variables in the model is one. The regression plane corresponding to this model is shown in Figure TrueRegrPlane. Also shown is an observed data point and the corresponding random error, $$\epsilon $$. The true regression model is usually not known (and therefore the values of the random error terms corresponding to observed data points remain unknown). However, the regression model can be estimated by calculating the parameters of the model for an observed data set. This is explained in Section 5.MatrixApproach. Figure ContourPlot1 shows the contour plot for the regression model of Eqn. (FirstOrderModelExample). The contour plot shows lines of constant mean response values as a function of $${{x}_{1}}$$  and  $${{x}_{2}}$$. The contour lines for the given regression model are straight lines as seen on the plot. Straight contour lines result for first order regression models with no interaction terms. A linear regression model may also take the following form:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+\epsilon $$



A cross-product term, $${{x}_{1}}{{x}_{2}}$$, is included in the model. This term represents an interaction effect between the two variables $${{x}_{1}}$$  and  $${{x}_{2}}$$. Interaction means that the effect produced by a change in the predictor variable on the response depends on the level of the other predictor variable(s). As an example of a linear regression model with interaction, consider the model given by the equation $$Y=30+5{{x}_{1}}+7{{x}_{2}}+3{{x}_{1}}{{x}_{2}}+\epsilon $$. The regression plane and contour plot for this model are shown in Figures RegrPlaneWInteraction and ContourPlotWInteraction, respectively.
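The meaning of interaction can be checked numerically. The sketch below uses the example model above with the random error term dropped; the helper function name is ours, for illustration only:

```python
# Mean response for the example model Y = 30 + 5*x1 + 7*x2 + 3*x1*x2
# (random error term omitted). The helper name is ours, for illustration.
def mean_response(x1, x2):
    return 30 + 5 * x1 + 7 * x2 + 3 * x1 * x2

# Change in mean response for a unit increase in x1, at two levels of x2.
# Without the interaction term both differences would equal 5.
delta_at_x2_0 = mean_response(1, 0) - mean_response(0, 0)  # 5 when x2 = 0
delta_at_x2_2 = mean_response(1, 2) - mean_response(0, 2)  # 11 when x2 = 2
```

The effect of a unit change in $${{x}_{1}}$$  is no longer constant; it grows by the interaction coefficient times the level of  $${{x}_{2}}$$.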

Now consider the regression model shown next:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}x_{1}^{2}+{{\beta }_{3}}x_{1}^{3}+\epsilon $$

This model is also a linear regression model and is referred to as a polynomial regression model. Polynomial regression models contain squared and higher order terms of the predictor variables, making the response surface curvilinear. As an example of a polynomial regression model with an interaction term, consider the following equation:


 * $$Y=500+5{{x}_{1}}+7{{x}_{2}}-3x_{1}^{2}-5x_{2}^{2}+3{{x}_{1}}{{x}_{2}}+\epsilon $$





This model is a second order model because the maximum power of the terms in the model is two. The regression surface for this model is shown in Figure PolynomialRegrSurface. Such regression models are used in RSM to find the optimum value of the response, $$Y$$  (for details see Chapter 9). Notice that, although the shape of the regression surface is curvilinear, the regression model of Eqn. (SecondOrderModelEx) is still linear because the model is linear in the parameters. The contour plot for this model is shown in Figure ContourPlotPolynomialRegr. All multiple linear regression models can be expressed in the following general form:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+...+{{\beta }_{k}}{{x}_{k}}+\epsilon $$

where $$k$$  denotes the number of terms in the model. For example, the model of Eqn. (SecondOrderModelEx) can be written in the general form using $${{x}_{3}}=x_{1}^{2}$$,  $${{x}_{4}}=x_{2}^{2}$$  and  $${{x}_{5}}={{x}_{1}}{{x}_{2}}$$  as follows:


 * $$Y=500+5{{x}_{1}}+7{{x}_{2}}-3{{x}_{3}}-5{{x}_{4}}+3{{x}_{5}}+\epsilon $$

Estimating Regression Models Using Least Squares
Consider a multiple linear regression model with $$k$$  predictor variables:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+...+{{\beta }_{k}}{{x}_{k}}+\epsilon $$

Let each of the $$k$$  predictor variables,  $${{x}_{1}}$$,  $${{x}_{2}}$$ ... $${{x}_{k}}$$ , have  $$n$$  levels. Then $${{x}_{ij}}$$  represents the  $$i$$ th level of the  $$j$$ th predictor variable  $${{x}_{j}}$$. For example, $${{x}_{51}}$$  represents the fifth level of the first predictor variable  $${{x}_{1}}$$, while  $${{x}_{19}}$$  represents the first level of the ninth predictor variable,  $${{x}_{9}}$$. Observations, $${{y}_{1}}$$,  $${{y}_{2}}$$ ... $${{y}_{n}}$$, recorded for each of these $$n$$  levels can be expressed in the following way:


 * $$\begin{align}

& {{y}_{1}}= & {{\beta }_{0}}+{{\beta }_{1}}{{x}_{11}}+{{\beta }_{2}}{{x}_{12}}+...+{{\beta }_{k}}{{x}_{1k}}+{{\epsilon }_{1}} \\ & {{y}_{2}}= & {{\beta }_{0}}+{{\beta }_{1}}{{x}_{21}}+{{\beta }_{2}}{{x}_{22}}+...+{{\beta }_{k}}{{x}_{2k}}+{{\epsilon }_{2}} \\ & & .. \\  & {{y}_{i}}= & {{\beta }_{0}}+{{\beta }_{1}}{{x}_{i1}}+{{\beta }_{2}}{{x}_{i2}}+...+{{\beta }_{k}}{{x}_{ik}}+{{\epsilon }_{i}} \\ & & .. \\  & {{y}_{n}}= & {{\beta }_{0}}+{{\beta }_{1}}{{x}_{n1}}+{{\beta }_{2}}{{x}_{n2}}+...+{{\beta }_{k}}{{x}_{nk}}+{{\epsilon }_{n}} \end{align}$$





The system of $$n$$  equations shown previously can be represented in matrix notation as follows:


 * $$y=X\beta +\epsilon $$


 * where


 * $$y=\left[ \begin{matrix}

{{y}_{1}} \\ {{y}_{2}} \\ . \\   .  \\   .  \\   {{y}_{n}}  \\ \end{matrix} \right]\text{     }X=\left[ \begin{matrix} 1 & {{x}_{11}} & {{x}_{12}} &. & . & . & {{x}_{1k}} \\ 1 & {{x}_{21}} & {{x}_{22}} &. & . & . & {{x}_{2k}} \\ . & . & . & {} & {} & {} & . \\   . & . & . & {} & {} & {} & .  \\   . & . & . & {} & {} & {} & .  \\   1 & {{x}_{n1}} & {{x}_{n2}} &. & . & . & {{x}_{nk}} \\ \end{matrix} \right]$$


 * $$\beta =\left[ \begin{matrix}

{{\beta }_{0}} \\ {{\beta }_{1}} \\ . \\   .  \\   .  \\   {{\beta }_{k}}  \\ \end{matrix} \right]\text{   and   }\epsilon =\left[ \begin{matrix} {{\epsilon }_{1}} \\ {{\epsilon }_{2}} \\ . \\   .  \\   .  \\   {{\epsilon }_{n}}  \\ \end{matrix} \right]$$

The matrix $$X$$  in Eqn. (TrueModelMatrixNotation) is referred to as the design matrix. It contains information about the levels of the predictor variables at which the observations are obtained. The vector $$\beta $$  contains all the regression coefficients. To obtain the regression model, $$\beta $$  must be estimated. The least squares estimates of $$\beta $$  are obtained using the following equation:


 * $$\hat{\beta }={{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y$$

where $$^{\prime }$$  represents the transpose of the matrix while  $$^{-1}$$  represents the matrix inverse. Knowing the estimates, $$\hat{\beta }$$, the multiple linear regression model can now be estimated as:


 * $$\hat{y}=X\hat{\beta }$$

The estimated regression model is also referred to as the fitted model. The observations, $${{y}_{i}}$$, may be different from the fitted values  $${{\hat{y}}_{i}}$$  obtained from this model. The difference between these two values is the residual, $${{e}_{i}}$$. The vector of residuals, $$e$$, is obtained as:


 * $$e=y-\hat{y}$$
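The sequence of formulas above, from the design matrix through the residuals, can be sketched end to end. The data below are made up for illustration; the response is generated from known coefficients with no noise, so the estimates can be checked against them:

```python
import numpy as np

# Made-up data: n = 6 observations of k = 2 predictor variables.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
y = 30 + 5 * x1 + 7 * x2          # noise-free, so the fit is exact

# Design matrix: a column of ones for the intercept, then the predictors.
X = np.column_stack([np.ones_like(x1), x1, x2])

# Least squares estimates: beta_hat = (X'X)^{-1} X' y
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y

y_hat = X @ beta_hat              # fitted values
e = y - y_hat                     # residuals
```

Because the responses lie exactly on a plane here, the estimates recover the generating coefficients and the residuals are zero; with real data the residuals are generally nonzero.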

The fitted model of Eqn. (FittedValueMatrixNotation) can also be written as follows, using $$\hat{\beta }={{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y$$  from Eqn. (LeastSquareEstimate):


 * $$\begin{align}

& \hat{y}= & X\hat{\beta } \\ & = & X{{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y \\ & = & Hy \end{align}$$

where $$H=X{{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}$$. The matrix, $$H$$, is referred to as the hat matrix. It transforms the vector of the observed response values, $$y$$, to the vector of fitted values,  $$\hat{y}$$.
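Two defining properties of the hat matrix, symmetry and idempotence ($$HH=H$$), can be verified numerically. The design matrix below is made up for illustration:

```python
import numpy as np

# Made-up design matrix: intercept column plus two predictors.
X = np.column_stack([np.ones(5),
                     [1.0, 2.0, 3.0, 4.0, 5.0],
                     [2.0, 1.0, 5.0, 3.0, 4.0]])

# Hat matrix: H = X (X'X)^{-1} X'
H = X @ np.linalg.inv(X.T @ X) @ X.T

# H maps observed responses to fitted values: y_hat = H y
y = np.array([10.0, 12.0, 25.0, 20.0, 27.0])
y_hat = H @ y
```

Idempotence reflects the fact that $$H$$  is a projection: projecting the fitted values a second time changes nothing.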

Example 1

An analyst studying a chemical process expects the yield to be affected by the levels of two factors, $${{x}_{1}}$$  and  $${{x}_{2}}$$. Observations recorded for various levels of the two factors are shown in Table 5.1. The analyst wants to fit a first order regression model to the data. Interaction between $${{x}_{1}}$$  and  $${{x}_{2}}$$  is not expected based on knowledge of similar processes. Units of the factor levels and the yield are ignored for the analysis.



The data of Table 5.1 can be entered into DOE++ using the Multiple Regression tool as shown in Figure MLRTDataEntrySshot. A scatter plot for the data in Table 5.1 is shown in Figure ThreedScatterPlot. The first order regression model applicable to this data set having two predictor variables is:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\epsilon $$

where the dependent variable, $$Y$$, represents the yield and the predictor variables,  $${{x}_{1}}$$  and  $${{x}_{2}}$$ , represent the two factors respectively. The $$X$$  and  $$y$$  matrices for the data can be obtained as:


 * $$X=\left[ \begin{matrix}

1 & 41.9 & 29.1 \\   1 & 43.4 & 29.3  \\   . & . & .  \\   . & . & .  \\   . & . & .  \\   1 & 77.8 & 32.9  \\ \end{matrix} \right]\text{     }y=\left[ \begin{matrix} 251.3 \\   251.3  \\   .  \\   .  \\   .  \\   349.0  \\ \end{matrix} \right]$$





The least squares estimates, $$\hat{\beta }$$, can now be obtained:


 * $$\begin{align}

& \hat{\beta }= & {{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y \\ & = & {{\left[ \begin{matrix} 17 & 941 & 525.3 \\   941 & 54270 & 29286  \\   525.3 & 29286 & 16254  \\ \end{matrix} \right]}^{-1}}\left[ \begin{matrix} 4902.8 \\   276610  \\   152020  \\ \end{matrix} \right] \\ & = & \left[ \begin{matrix} -153.51 \\   1.24  \\   12.08  \\ \end{matrix} \right] \end{align}$$


 * Thus:


 * $$\hat{\beta }=\left[ \begin{matrix}

{{{\hat{\beta }}}_{0}} \\ {{{\hat{\beta }}}_{1}} \\ {{{\hat{\beta }}}_{2}} \\ \end{matrix} \right]=\left[ \begin{matrix} -153.51 \\   1.24  \\   12.08  \\ \end{matrix} \right]$$

and the estimated regression coefficients are $${{\hat{\beta }}_{0}}=-153.51$$,  $${{\hat{\beta }}_{1}}=1.24$$  and  $${{\hat{\beta }}_{2}}=12.08$$. The fitted regression model is:


 * $$\begin{align}

& \hat{y}= & {{{\hat{\beta }}}_{0}}+{{{\hat{\beta }}}_{1}}{{x}_{1}}+{{{\hat{\beta }}}_{2}}{{x}_{2}} \\ & = & -153.5+1.24{{x}_{1}}+12.08{{x}_{2}} \end{align}$$

In DOE++, the fitted regression model can be viewed using the Show Analysis Summary icon in the Control Panel. The model is shown in Figure EquationScreenshot.

A plot of the fitted regression plane is shown in Figure FittedRegrModel. The fitted regression model can be used to obtain fitted values, $${{\hat{y}}_{i}}$$, corresponding to an observed response value,  $${{y}_{i}}$$. For example, the fitted value corresponding to the fifth observation is:






 * $$\begin{align}

& {{{\hat{y}}}_{i}}= & -153.5+1.24{{x}_{i1}}+12.08{{x}_{i2}} \\ & {{{\hat{y}}}_{5}}= & -153.5+1.24{{x}_{51}}+12.08{{x}_{52}} \\ & = & -153.5+1.24(47.3)+12.08(29.9) \\ & = & 266.3  \end{align}$$

The observed fifth response value is $${{y}_{5}}=273.0$$. The residual corresponding to this value is:


 * $$\begin{align}

& {{e}_{i}}= & {{y}_{i}}-{{{\hat{y}}}_{i}} \\ & {{e}_{5}}= & {{y}_{5}}-{{{\hat{y}}}_{5}} \\ & = & 273.0-266.3 \\ & = & 6.7  \end{align}$$

In DOE++, fitted values and residuals are available using the Diagnostic icon in the Control Panel. The values are shown in Figure DiagnosticSshot. The fitted regression model can also be used to predict response values. For example, to obtain the response value for a new observation corresponding to 47 units of $${{x}_{1}}$$  and 31 units of  $${{x}_{2}}$$, the value is calculated using:


 * $$\begin{align}

& \hat{y}(47,31)= & -153.5+1.24(47)+12.08(31) \\ & = & 279.26 \end{align}$$
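The two evaluations of the fitted model above can be retraced with the rounded coefficients; the helper function name is ours:

```python
# Fitted model from this example: y_hat = -153.5 + 1.24*x1 + 12.08*x2
# (coefficients rounded as in the text; the function name is ours).
def y_hat(x1, x2):
    return -153.5 + 1.24 * x1 + 12.08 * x2

y5_hat = y_hat(47.3, 29.9)   # fitted value for the fifth observation
y_new = y_hat(47, 31)        # predicted yield at x1 = 47, x2 = 31
```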

Properties of the Least Square Estimators, $$\hat{\beta }$$
The least squares estimates, $${{\hat{\beta }}_{0}}$$,  $${{\hat{\beta }}_{1}}$$ ,  $${{\hat{\beta }}_{2}}$$ ... $${{\hat{\beta }}_{k}}$$ , are unbiased estimators of  $${{\beta }_{0}}$$ ,  $${{\beta }_{1}}$$ ,  $${{\beta }_{2}}$$ ... $${{\beta }_{k}}$$ , provided that the random error terms,  $${{\epsilon }_{i}}$$ , are normally and independently distributed. The variances of the $$\hat{\beta }$$ s are obtained using the  $${{({{X}^{\prime }}X)}^{-1}}$$  matrix. The variance-covariance matrix of the estimated regression coefficients is obtained as follows:


 * $$C={{\hat{\sigma }}^{2}}{{({{X}^{\prime }}X)}^{-1}}$$



$$C$$ is a symmetric matrix whose diagonal elements,  $${{C}_{jj}}$$, represent the variance of the estimated  $$j$$ th regression coefficient,  $${{\hat{\beta }}_{j}}$$. The off-diagonal elements, $${{C}_{ij}}$$, represent the covariance between the  $$i$$ th and  $$j$$ th estimated regression coefficients,  $${{\hat{\beta }}_{i}}$$  and  $${{\hat{\beta }}_{j}}$$. The value of $${{\hat{\sigma }}^{2}}$$  is obtained using the error mean square,  $$M{{S}_{E}}$$, which can be calculated as discussed in Section 5.MANOVA. The variance-covariance matrix for the data in Table 5.1 is shown in Figure VarCovMatrixSshot. It is available in DOE++ using the Show Analysis Summary icon in the Control Panel. Calculations to obtain the matrix are given in Example 3 in Section 5.tTest. The positive square root of $${{C}_{jj}}$$  represents the estimated standard deviation of the  $$j$$ th regression coefficient,  $${{\hat{\beta }}_{j}}$$, and is called the estimated standard error of  $${{\hat{\beta }}_{j}}$$  (abbreviated  $$se({{\hat{\beta }}_{j}})$$ ).


 * $$se({{\hat{\beta }}_{j}})=\sqrt{{{C}_{jj}}}$$
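Both the variance-covariance matrix and the standard errors follow directly from $${{({{X}^{\prime }}X)}^{-1}}$$. A minimal sketch on made-up data (the responses deviate from an exact plane so that the error mean square is nonzero):

```python
import numpy as np

# Made-up data: responses do not lie exactly on a plane, so MS_E > 0.
X = np.column_stack([np.ones(6),
                     [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
                     [2.0, 1.0, 4.0, 3.0, 6.0, 5.0]])
y = np.array([48.0, 46.0, 74.0, 70.0, 99.0, 94.0])
n, p = X.shape                       # p = k + 1 parameters

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
e = y - X @ beta_hat
sigma2_hat = (e @ e) / (n - p)       # MS_E estimates sigma^2

C = sigma2_hat * XtX_inv             # variance-covariance matrix
se = np.sqrt(np.diag(C))             # se(beta_j) = sqrt(C_jj)
```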



Hypothesis Tests in Multiple Linear Regression
This section discusses hypothesis tests on the regression coefficients in multiple linear regression. As in the case of simple linear regression, these tests can only be carried out if it can be assumed that the random error terms, $${{\epsilon }_{i}}$$, are normally and independently distributed with a mean of zero and variance of  $${{\sigma }^{2}}$$. Three types of hypothesis tests can be carried out for multiple linear regression models:
 * Test for significance of regression

This test checks the significance of the whole regression model.


 * $$t$$  test

This test checks the significance of individual regression coefficients.


 * Partial $$F$$  test

This test can be used to simultaneously check the significance of a number of regression coefficients. It can also be used to test individual coefficients.

Test for Significance of Regression
The test for significance of regression in the case of multiple linear regression analysis is carried out using the analysis of variance. The test is used to check if a linear statistical relationship exists between the response variable and at least one of the predictor variables. The statements for the hypotheses are:


 * $$\begin{align}

& {{H}_{0}}: & {{\beta }_{1}}={{\beta }_{2}}=...={{\beta }_{k}}=0 \\ & {{H}_{1}}: & {{\beta }_{j}}\ne 0\text{    for at least one }j \end{align}$$

The test for $${{H}_{0}}$$  is carried out using the following statistic:


 * $${{F}_{0}}=\frac{M{{S}_{R}}}{M{{S}_{E}}}$$

where $$M{{S}_{R}}$$  is the regression mean square and  $$M{{S}_{E}}$$  is the error mean square. If the null hypothesis, $${{H}_{0}}$$, is true then the statistic  $${{F}_{0}}$$  follows the  $$F$$  distribution with  $$k$$  degrees of freedom in the numerator and  $$n-(k+1)$$  degrees of freedom in the denominator. The null hypothesis, $${{H}_{0}}$$, is rejected if the calculated statistic,  $${{F}_{0}}$$ , is such that:


 * $${{F}_{0}}>{{f}_{\alpha ,k,n-(k+1)}}$$

Calculation of the Statistic $${{F}_{0}}$$
To calculate the statistic $${{F}_{0}}$$, the mean squares  $$M{{S}_{R}}$$  and  $$M{{S}_{E}}$$  must be known. As explained in Chapter 4, the mean squares are obtained by dividing the sum of squares by their degrees of freedom. For example, the total mean square, $$M{{S}_{T}}$$, is obtained as follows:


 * $$M{{S}_{T}}=\frac{S{{S}_{T}}}{dof(S{{S}_{T}})}$$

where $$S{{S}_{T}}$$  is the total sum of squares and  $$dof(S{{S}_{T}})$$  is the number of degrees of freedom associated with  $$S{{S}_{T}}$$. In multiple linear regression, the following equation is used to calculate $$S{{S}_{T}}$$ :


 * $$S{{S}_{T}}={{y}^{\prime }}\left[ I-(\frac{1}{n})J \right]y$$

where $$n$$  is the total number of observations,  $$y$$  is the vector of observations (that was defined in Section 5.MatrixApproach),  $$I$$  is the identity matrix of order  $$n$$  and  $$J$$  represents an  $$n\times n$$  square matrix of ones. The number of degrees of freedom associated with $$S{{S}_{T}}$$,  $$dof(S{{S}_{T}})$$ , is ( $$n-1$$ ). Knowing $$S{{S}_{T}}$$  and  $$dof(S{{S}_{T}})$$  the total mean square,  $$M{{S}_{T}}$$, can be calculated.
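The matrix form of $$S{{S}_{T}}$$  is just the familiar centered sum of squares,  $$\sum {{({{y}_{i}}-\bar{y})}^{2}}$$, as a small numeric check shows:

```python
import numpy as np

# SS_T = y'[I - (1/n)J]y equals the centered sum of squares sum((y - ybar)^2).
y = np.array([3.0, 5.0, 7.0, 9.0])
n = len(y)
I = np.eye(n)
J = np.ones((n, n))                  # n x n matrix of ones

SS_T = y @ (I - J / n) @ y           # matrix form
SS_T_direct = np.sum((y - y.mean()) ** 2)
```

For this small vector the mean is 6, the deviations are -3, -1, 1, 3, and both computations give 20.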

The regression mean square, $$M{{S}_{R}}$$, is obtained by dividing the regression sum of squares,  $$S{{S}_{R}}$$ , by the respective degrees of freedom,  $$dof(S{{S}_{R}})$$ , as follows:


 * $$M{{S}_{R}}=\frac{S{{S}_{R}}}{dof(S{{S}_{R}})}$$

The regression sum of squares, $$S{{S}_{R}}$$, is calculated using the following equation:


 * $$S{{S}_{R}}={{y}^{\prime }}\left[ H-(\frac{1}{n})J \right]y$$

where $$n$$  is the total number of observations,  $$y$$  is the vector of observations,  $$H$$  is the hat matrix (that was defined in Section 5.MatrixApproach) and  $$J$$  represents an  $$n\times n$$  square matrix of ones. The number of degrees of freedom associated with $$S{{S}_{R}}$$,  $$dof(S{{S}_{R}})$$ , is  $$k$$ , where  $$k$$  is the number of predictor variables in the model. Knowing $$S{{S}_{R}}$$  and  $$dof(S{{S}_{R}})$$  the regression mean square,  $$M{{S}_{R}}$$, can be calculated. The error mean square, $$M{{S}_{E}}$$, is obtained by dividing the error sum of squares,  $$S{{S}_{E}}$$ , by the respective degrees of freedom,  $$dof(S{{S}_{E}})$$ , as follows:


 * $$M{{S}_{E}}=\frac{S{{S}_{E}}}{dof(S{{S}_{E}})}$$

The error sum of squares, $$S{{S}_{E}}$$, is calculated using the following equation:


 * $$S{{S}_{E}}={{y}^{\prime }}(I-H)y$$

where $$y$$  is the vector of observations,  $$I$$  is the identity matrix of order  $$n$$  and  $$H$$  is the hat matrix. The number of degrees of freedom associated with $$S{{S}_{E}}$$,  $$dof(S{{S}_{E}})$$ , is  $$n-(k+1)$$ , where  $$n$$  is the total number of observations and  $$k$$  is the number of predictor variables in the model. Knowing $$S{{S}_{E}}$$  and  $$dof(S{{S}_{E}})$$, the error mean square,  $$M{{S}_{E}}$$ , can be calculated. The error mean square is an estimate of the variance, $${{\sigma }^{2}}$$, of the random error terms,  $${{\epsilon }_{i}}$$.


 * $${{\hat{\sigma }}^{2}}=M{{S}_{E}}$$
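The three matrix forms above fit together as $$S{{S}_{T}}=S{{S}_{R}}+S{{S}_{E}}$$, which can be checked on made-up data:

```python
import numpy as np

# Made-up data for checking the decomposition SS_T = SS_R + SS_E.
X = np.column_stack([np.ones(6),
                     [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
                     [2.0, 1.0, 4.0, 3.0, 6.0, 5.0]])
y = np.array([48.0, 46.0, 74.0, 70.0, 99.0, 94.0])
n = len(y)
I, J = np.eye(n), np.ones((n, n))
H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix

SS_T = y @ (I - J / n) @ y
SS_R = y @ (H - J / n) @ y
SS_E = y @ (I - H) @ y
```

The degrees of freedom add up the same way: $$(n-1)=k+\left[ n-(k+1) \right]$$.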

Example 2

The test for the significance of regression, for the regression model obtained for the data in Table 5.1, is illustrated in this example. The null hypothesis for the model is:


 * $${{H}_{0}}\ \ :\ \ {{\beta }_{1}}={{\beta }_{2}}=0$$

The statistic to test $${{H}_{0}}$$  is:


 * $${{F}_{0}}=\frac{M{{S}_{R}}}{M{{S}_{E}}}$$

To calculate $${{F}_{0}}$$, the sums of squares are calculated first so that the mean squares can be obtained. Then the mean squares are used to calculate the statistic $${{F}_{0}}$$  to carry out the significance test. The regression sum of squares, $$S{{S}_{R}}$$, can be obtained as:


 * $$S{{S}_{R}}={{y}^{\prime }}\left[ H-(\frac{1}{n})J \right]y$$

The hat matrix, $$H$$, is calculated as follows using the design matrix  $$X$$  from Example 1:


 * $$\begin{align}

& H= & X{{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }} \\ & = & \left[ \begin{matrix} 0.27552 & 0.25154 & . & . & -0.04030 \\   0.25154 & 0.23021 & . & . & -0.02920  \\   . & . & . & . & .  \\   . & . & . & . & .  \\   -0.04030 & -0.02920 & . & . & 0.30115  \\ \end{matrix} \right] \end{align}$$

Knowing $$y$$,  $$H$$  and  $$J$$ , the regression sum of squares,  $$S{{S}_{R}}$$ , can be calculated:


 * $$\begin{align}

& S{{S}_{R}}= & {{y}^{\prime }}\left[ H-(\frac{1}{n})J \right]y \\ & = & 12816.35 \end{align}$$

The degrees of freedom associated with $$S{{S}_{R}}$$  is  $$k$$, which equals two since there are two predictor variables in the data in Table 5.1. Therefore, the regression mean square is:


 * $$\begin{align}

& M{{S}_{R}}= & \frac{S{{S}_{R}}}{dof(S{{S}_{R}})} \\ & = & \frac{12816.35}{2} \\ & = & 6408.17 \end{align}$$

Similarly, to calculate the error mean square, $$M{{S}_{E}}$$, the error sum of squares,  $$S{{S}_{E}}$$ , can be obtained as:


 * $$\begin{align}

& S{{S}_{E}}= & {{y}^{\prime }}\left[ I-H \right]y \\ & = & 423.37 \end{align}$$

The degrees of freedom associated with $$S{{S}_{E}}$$  is  $$n-(k+1)$$. Therefore, the error mean square, $$M{{S}_{E}}$$, is:


 * $$\begin{align}

& M{{S}_{E}}= & \frac{S{{S}_{E}}}{dof(S{{S}_{E}})} \\ & = & \frac{S{{S}_{E}}}{(n-(k+1))} \\ & = & \frac{423.37}{(17-(2+1))} \\ & = & 30.24 \end{align}$$

The statistic to test the significance of regression can now be calculated as:


 * $$\begin{align}

& {{f}_{0}}= & \frac{M{{S}_{R}}}{M{{S}_{E}}} \\ & = & \frac{6408.17}{30.24} \\ & = & 211.9 \end{align}$$

The critical value for this test, corresponding to a significance level of 0.1, is:


 * $$\begin{align}

& {{f}_{\alpha ,k,n-(k+1)}}= & {{f}_{0.1,2,14}} \\ & = & 2.726 \end{align}$$

Since $${{f}_{0}}>{{f}_{0.1,2,14}}$$,  $${{H}_{0}}\ \ :$$   $${{\beta }_{1}}={{\beta }_{2}}=0$$  is rejected and it is concluded that at least one coefficient out of  $${{\beta }_{1}}$$  and  $${{\beta }_{2}}$$  is significant. In other words, it is concluded that a regression model exists between yield and either one or both of the factors in Table 5.1. The analysis of variance is summarized in Table 5.2.
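The arithmetic of this example can be retraced directly from the sums of squares:

```python
# Sums of squares and dimensions from this example.
SS_R, SS_E = 12816.35, 423.37
k, n = 2, 17

MS_R = SS_R / k               # regression mean square
MS_E = SS_E / (n - (k + 1))   # error mean square, 14 degrees of freedom
f0 = MS_R / MS_E              # test statistic for significance of regression
```

Since $$f_0 \approx 211.9$$  far exceeds the critical value 2.726, the conclusion above follows.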



Test on Individual Regression Coefficients ( $$t$$ Test)
The $$t$$  test is used to check the significance of individual regression coefficients in the multiple linear regression model. Adding a significant variable to a regression model makes the model more effective, while adding an unimportant variable may make the model worse. The hypothesis statements to test the significance of a particular regression coefficient, $${{\beta }_{j}}$$, are:


 * $$\begin{align}

& {{H}_{0}}: & {{\beta }_{j}}=0 \\ & {{H}_{1}}: & {{\beta }_{j}}\ne 0 \end{align}$$

The test statistic for this test is based on the $$t$$  distribution (and is similar to the one used in the case of simple linear regression models in Chapter 4):


 * $${{T}_{0}}=\frac{{{{\hat{\beta }}}_{j}}}{se({{{\hat{\beta }}}_{j}})}$$

where the standard error, $$se({{\hat{\beta }}_{j}})$$, is obtained from Eqn. (StandardErrorBetaJ). The analyst would fail to reject the null hypothesis if the test statistic, calculated using Eqn. (TtestStatistic), lies in the acceptance region:


 * $$-{{t}_{\alpha /2,n-(k+1)}}<{{T}_{0}}<{{t}_{\alpha /2,n-(k+1)}}$$

This test measures the contribution of a variable while the remaining variables are included in the model. For the model $$\hat{y}={{\hat{\beta }}_{0}}+{{\hat{\beta }}_{1}}{{x}_{1}}+{{\hat{\beta }}_{2}}{{x}_{2}}+{{\hat{\beta }}_{3}}{{x}_{3}}$$, if the test is carried out for  $${{\beta }_{1}}$$ , then the test will check the significance of including the variable  $${{x}_{1}}$$  in the model that contains  $${{x}_{2}}$$  and  $${{x}_{3}}$$  (i.e. the model  $$\hat{y}={{\hat{\beta }}_{0}}+{{\hat{\beta }}_{2}}{{x}_{2}}+{{\hat{\beta }}_{3}}{{x}_{3}}$$ ). Hence the test is also referred to as partial or marginal test. In DOE++, this test is displayed in the Regression Information table.

Example 3

The test to check the significance of the estimated regression coefficients for the data in Table 5.1 is illustrated in this example. The null hypothesis to test the coefficient $${{\beta }_{2}}$$  is:


 * $${{H}_{0}}\ \ :\ \ {{\beta }_{2}}=0$$

The null hypothesis to test $${{\beta }_{1}}$$  can be obtained in a similar manner. To calculate the test statistic, $${{T}_{0}}$$, we need to calculate the standard error using Eqn. (StandardErrorBetaJ). In Example 2, the value of the error mean square, $$M{{S}_{E}}$$, was obtained as 30.24. The error mean square is an estimate of the variance, $${{\sigma }^{2}}$$.


 * Therefore:


 * $$\begin{align}

& {{{\hat{\sigma }}}^{2}}= & M{{S}_{E}} \\ & = & 30.24 \end{align}$$

The variance-covariance matrix of the estimated regression coefficients is:


 * $$\begin{align}

& C= & {{{\hat{\sigma }}}^{2}}{{({{X}^{\prime }}X)}^{-1}} \\ & = & 30.24\left[ \begin{matrix} 336.5 & 1.2 & -13.1 \\   1.2 & 0.005 & -0.049  \\   -13.1 & -0.049 & 0.5  \\ \end{matrix} \right] \\ & = & \left[ \begin{matrix} 10176.75 & 37.145 & -395.83 \\   37.145 & 0.1557 & -1.481  \\   -395.83 & -1.481 & 15.463  \\ \end{matrix} \right] \end{align}$$

From the diagonal elements of $$C$$, the estimated standard error for  $${{\hat{\beta }}_{1}}$$  and  $${{\hat{\beta }}_{2}}$$  is:


 * $$\begin{align}

& se({{{\hat{\beta }}}_{1}})= & \sqrt{0.1557}=0.3946 \\ & se({{{\hat{\beta }}}_{2}})= & \sqrt{15.463}=3.93 \end{align}$$

The corresponding test statistics for these coefficients are:


 * $$\begin{align}

& {{({{t}_{0}})}_{{{{\hat{\beta }}}_{1}}}}= & \frac{{{{\hat{\beta }}}_{1}}}{se({{{\hat{\beta }}}_{1}})}=\frac{1.24}{0.3946}=3.1393 \\ & {{({{t}_{0}})}_{{{{\hat{\beta }}}_{2}}}}= & \frac{{{{\hat{\beta }}}_{2}}}{se({{{\hat{\beta }}}_{2}})}=\frac{12.08}{3.93}=3.0726 \end{align}$$

The critical values for the present $$t$$  test at a significance of 0.1 are:


 * $$\begin{align}

& {{t}_{\alpha /2,n-(k+1)}}= & {{t}_{0.05,14}}=1.761 \\ & -{{t}_{\alpha /2,n-(k+1)}}= & -{{t}_{0.05,14}}=-1.761 \end{align}$$

Considering $${{\hat{\beta }}_{2}}$$, it can be seen that  $${{({{t}_{0}})}_{{{{\hat{\beta }}}_{2}}}}$$  does not lie in the acceptance region of  $$-{{t}_{0.05,14}}<{{t}_{0}}<{{t}_{0.05,14}}$$. The null hypothesis, $${{H}_{0}}\ \ :\ \ {{\beta }_{2}}=0$$, is rejected and it is concluded that  $${{\beta }_{2}}$$  is significant at  $$\alpha =0.1$$. This conclusion can also be arrived at using the $$p$$  value noting that the hypothesis is two-sided. The $$p$$  value corresponding to the test statistic,  $${{({{t}_{0}})}_{{{{\hat{\beta }}}_{2}}}}=$$   $$3.0726$$, based on the  $$t$$  distribution with 14 degrees of freedom is:


 * $$\begin{align}

& p\text{ }value= & 2\times (1-P(T\le |{{t}_{0}}|)) \\ & = & 2\times (1-0.9959) \\  & = & 0.0083  \end{align}$$

Since the $$p$$  value is less than the significance,  $$\alpha =0.1$$, it is concluded that  $${{\beta }_{2}}$$  is significant. The hypothesis test on $${{\beta }_{1}}$$  can be carried out in a similar manner.
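The test statistics of this example can be reproduced from the diagonal of $$C$$. Note that, because the inputs below are the rounded values quoted above, the results match the chapter's statistics only to about two decimal places:

```python
import math

# Estimated coefficients and diagonal entries of C from this example
# (values rounded as quoted in the text).
beta1_hat, beta2_hat = 1.24, 12.08
C11, C22 = 0.1557, 15.463

se1 = math.sqrt(C11)          # se(beta1_hat), about 0.3946
se2 = math.sqrt(C22)          # se(beta2_hat), about 3.93

t1 = beta1_hat / se1          # about 3.14
t2 = beta2_hat / se2          # about 3.07

t_crit = 1.761                # t_{0.05, 14}
significant = (abs(t1) > t_crit, abs(t2) > t_crit)
```

Both statistics fall outside the acceptance region, so both coefficients are significant at $$\alpha =0.1$$.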

As explained in Chapter 4, in DOE++, the information related to the $$t$$  test is displayed in the Regression Information table as shown in Figure RegrInfoSshot. In this table, the $$t$$  test for  $${{\beta }_{2}}$$  is displayed in the row for the term Factor 2 because  $${{\beta }_{2}}$$  is the coefficient that represents this factor in the regression model. Columns labeled Standard Error, T Value and P Value represent the standard error, the test statistic for the $$t$$  test and the  $$p$$  value for the  $$t$$  test, respectively. These values have been calculated for $${{\beta }_{2}}$$  in this example. The Coefficient column represents the estimate of the regression coefficients. These values are calculated using Eqn. (LeastSquareEstimate) as shown in Example 1. The Effect column represents values obtained by multiplying the coefficients by a factor of 2. This value is useful in the case of two factor experiments and is explained in Chapter 7.

Columns labeled Low CI and High CI represent the limits of the confidence intervals for the regression coefficients and are explained in Section 5.RegrCoeffCI. The Variance Inflation Factor column displays values that give a measure of multicollinearity. This is explained in Section 5.MultiCollinearity.



Test on Subsets of Regression Coefficients (Partial $$F$$ Test)
This test can be considered to be the general form of the $$t$$  test mentioned in the previous section. This is because the test simultaneously checks the significance of including many (or even one) regression coefficients in the multiple linear regression model. Adding a variable to a model increases the regression sum of squares, $$S{{S}_{R}}$$. The test is based on this increase in the regression sum of squares. The increase in the regression sum of squares is called the extra sum of squares. Assume that the vector of the regression coefficients, $$\beta $$, for the multiple linear regression model,  $$y=X\beta +\epsilon $$ , is partitioned into two vectors with the second vector,  $${{\beta }_{2}}$$ , containing the last  $$r$$  regression coefficients, and the first vector,  $${{\beta }_{1}}$$ , containing the first ( $$k+1-r$$ ) coefficients as follows:


 * $$\beta =\left[ \begin{matrix}

{{\beta }_{1}} \\ {{\beta }_{2}} \\ \end{matrix} \right]$$


 * with:


 * $${{\beta }_{1}}=[{{\beta }_{0}},{{\beta }_{1}}...{{\beta }_{k-r}}{]}'\text{ and }{{\beta }_{2}}=[{{\beta }_{k-r+1}},{{\beta }_{k-r+2}}...{{\beta }_{k}}{]}'\text{   }$$

The hypothesis statements to test the significance of adding the regression coefficients in $${{\beta }_{2}}$$  to a model containing the regression coefficients in  $${{\beta }_{1}}$$  may be written as:


 * $$\begin{align}

& {{H}_{0}}: & {{\beta }_{2}}=0 \\ & {{H}_{1}}: & {{\beta }_{2}}\ne 0 \end{align}$$

The test statistic for this test follows the $$F$$  distribution and can be calculated as follows:


 * $${{F}_{0}}=\frac{S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{1}})/r}{M{{S}_{E}}}$$

where $$S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{1}})$$  is the increase in the regression sum of squares when the variables corresponding to the coefficients in  $${{\beta }_{2}}$$  are added to a model already containing  $${{\beta }_{1}}$$, and  $$M{{S}_{E}}$$  is obtained from Eqn. (ErrorMeanSquare). The value of the extra sum of squares is obtained as explained in the next section.

The null hypothesis, $${{H}_{0}}$$, is rejected if  $${{F}_{0}}>{{f}_{\alpha ,r,n-(k+1)}}$$. Rejection of $${{H}_{0}}$$  leads to the conclusion that at least one of the variables in  $${{x}_{k-r+1}}$$,  $${{x}_{k-r+2}}$$ ... $${{x}_{k}}$$  contributes significantly to the regression model. In DOE++, the results from the partial $$F$$  test are displayed in the ANOVA table.

Types of Extra Sum of Squares
The extra sum of squares can be calculated using either the partial (or adjusted) sum of squares or the sequential sum of squares. The type of extra sum of squares used affects the calculation of the test statistic of Eqn. (PartialFtest). In DOE++, selection for the type of extra sum of squares is available in the Options tab of the Control Panel as shown in Figure SSselectionSshot. The partial sum of squares is used as the default setting. The reason for this is explained in the following section on the partial sum of squares.



Partial Sum of Squares
The partial sum of squares for a term is the extra sum of squares when all terms, except the term under consideration, are included in the model. For example, consider the model:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+\epsilon $$

Assume that we need to know the partial sum of squares for $${{\beta }_{2}}$$. The partial sum of squares for $${{\beta }_{2}}$$  is the increase in the regression sum of squares when  $${{\beta }_{2}}$$  is added to the model. This increase is the difference in the regression sum of squares for the full model of Eqn. (PartialSSFullModel) and the model that includes all terms except $${{\beta }_{2}}$$. These terms are $${{\beta }_{0}}$$,  $${{\beta }_{1}}$$  and  $${{\beta }_{12}}$$. The model that contains these terms is:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+\epsilon $$

The partial sum of squares for $${{\beta }_{2}}$$  can be represented as  $$S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{0}},{{\beta }_{1}},{{\beta }_{12}})$$  and is calculated as follows:


 * $$\begin{align}

& S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{0}},{{\beta }_{1}},{{\beta }_{12}})= & S{{S}_{R}}\text{ for the full model}-S{{S}_{R}}\text{ for the model without }{{\beta }_{2}} \\ & = & S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}},{{\beta }_{12}})-S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{12}}) \end{align}$$

For the present case, $${{\beta }_{2}}=[{{\beta }_{2}}{]}'$$  and  $${{\beta }_{1}}=[{{\beta }_{0}},{{\beta }_{1}},{{\beta }_{12}}{]}'$$. It can be noted that for the partial sum of squares $${{\beta }_{1}}$$  contains all coefficients other than the coefficient being tested.

DOE++ uses the partial sum of squares as the default selection because the $$t$$  test explained in Section 5.tTest is a partial test, i.e., the  $$t$$  test on an individual coefficient is carried out by assuming that all the remaining coefficients are included in the model (similar to the way the partial sum of squares is calculated). The results from the $$t$$  test are displayed in the Regression Information table, while the results from the partial  $$F$$  test are displayed in the ANOVA table. To keep the results in the two tables consistent with each other, the partial sum of squares is used as the default selection for the results displayed in the ANOVA table. Note that the partial sums of squares for all terms of a model may not add up to the regression sum of squares for the full model when the regression coefficients are correlated. If it is preferred that the extra sums of squares for all terms in the model always add up to the regression sum of squares for the full model, then the sequential sum of squares should be used.

Example 4

This example illustrates the partial $$F$$  test using the partial sum of squares. The test is conducted for the coefficient $${{\beta }_{1}}$$  corresponding to the predictor variable  $${{x}_{1}}$$  for the data in Table 5.1. The regression model used for this data set in Example 1 is:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\epsilon $$

The null hypothesis to test the significance of $${{\beta }_{1}}$$  is:


 * $${{H}_{0}}\ \ :\ \ {{\beta }_{1}}=0$$

The statistic to test this hypothesis is:


 * $${{F}_{0}}=\frac{S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{1}})/r}{M{{S}_{E}}}$$

where $$S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{1}})$$  represents the partial sum of squares for  $${{\beta }_{1}}$$,  $$r$$  represents the number of degrees of freedom for  $$S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{1}})$$  (which is one because there is just one coefficient,  $${{\beta }_{1}}$$ , being tested) and  $$M{{S}_{E}}$$  is the error mean square that can be obtained using Eqn. (ErrorMeanSquare) and has been calculated in Example 2 as 30.24.

The partial sum of squares for $${{\beta }_{1}}$$  is the difference between the regression sum of squares for the full model,  $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\epsilon $$, and the regression sum of squares for the model excluding  $${{\beta }_{1}}$$ ,  $$Y={{\beta }_{0}}+{{\beta }_{2}}{{x}_{2}}+\epsilon $$. The regression sum of squares for the full model can be obtained using Eqn. (RegressionSumofSquares) and has been calculated in Example 2 as $$12816.35$$. Therefore:


 * $$S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}})=12816.35$$

The regression sum of squares for the model $$Y={{\beta }_{0}}+{{\beta }_{2}}{{x}_{2}}+\epsilon $$  is obtained as shown next. First the design matrix for this model, $${{X}_{{{\beta }_{0}},{{\beta }_{2}}}}$$, is obtained by dropping the second column in the design matrix of the full model,  $$X$$  (the full design matrix,  $$X$$ , was obtained in Example 1). The second column of $$X$$  corresponds to the coefficient  $${{\beta }_{1}}$$  which is no longer in the model. Therefore, the design matrix for the model, $$Y={{\beta }_{0}}+{{\beta }_{2}}{{x}_{2}}+\epsilon $$, is:


 * $${{X}_{{{\beta }_{0}},{{\beta }_{2}}}}=\left[ \begin{matrix}

1 & 29.1 \\   1 & 29.3  \\   . & .  \\   . & .  \\   1 & 32.9  \\ \end{matrix} \right]$$

The hat matrix corresponding to this design matrix is $${{H}_{{{\beta }_{0}},{{\beta }_{2}}}}$$. It can be calculated using $${{H}_{{{\beta }_{0}},{{\beta }_{2}}}}={{X}_{{{\beta }_{0}},{{\beta }_{2}}}}{{(X_{{{\beta }_{0}},{{\beta }_{2}}}^{\prime }{{X}_{{{\beta }_{0}},{{\beta }_{2}}}})}^{-1}}X_{{{\beta }_{0}},{{\beta }_{2}}}^{\prime }$$. Once $${{H}_{{{\beta }_{0}},{{\beta }_{2}}}}$$  is known, the regression sum of squares for the model  $$Y={{\beta }_{0}}+{{\beta }_{2}}{{x}_{2}}+\epsilon $$, can be calculated using Eqn. (RegressionSumofSquares) as:


 * $$\begin{align}

& S{{S}_{R}}({{\beta }_{0}},{{\beta }_{2}})= & {{y}^{\prime }}\left[ {{H}_{{{\beta }_{0}},{{\beta }_{2}}}}-(\frac{1}{n})J \right]y \\ & = & 12518.32 \end{align}$$

Therefore, the partial sum of squares for $${{\beta }_{1}}$$  is:


 * $$\begin{align}

& S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{1}})= & S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}})-S{{S}_{R}}({{\beta }_{0}},{{\beta }_{2}}) \\ & = & 12816.35-12518.32 \\ & = & 298.03  \end{align}$$

Knowing the partial sum of squares, the statistic to test the significance of $${{\beta }_{1}}$$  is:


 * $$\begin{align}

& {{f}_{0}}= & \frac{S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{1}})/r}{M{{S}_{E}}} \\ & = & \frac{298.03/1}{30.24} \\ & = & 9.855 \end{align}$$

The $$p$$  value corresponding to this statistic based on the  $$F$$  distribution with 1 degree of freedom in the numerator and 14 degrees of freedom in the denominator is:


 * $$\begin{align}

& p\text{ }value= & 1-P(F\le {{f}_{0}}) \\ & = & 1-0.9928 \\ & = & 0.0072  \end{align}$$

Assuming that the desired significance is 0.1, since $$p$$  value < 0.1,  $${{H}_{0}}\ \ :\ \ {{\beta }_{1}}=0$$  is rejected and it can be concluded that  $${{\beta }_{1}}$$  is significant. The test for $${{\beta }_{2}}$$  can be carried out in a similar manner. In the results obtained from DOE++, the calculations for this test are displayed in the ANOVA table as shown in Figure AnovaTableSshot. Note that the conclusion obtained in this example can also be obtained using the $$t$$  test as explained in Example 3 in Section 5.tTest. The ANOVA and Regression Information tables in DOE++ represent two different ways to test for the significance of the variables included in the multiple linear regression model.
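
The arithmetic of this example is easy to verify with a short script. The following sketch (scipy is assumed; the numeric inputs are the ones computed above) forms the test statistic and its $$p$$  value from the  $$F(1,14)$$  distribution:

```python
from scipy.stats import f

ss_extra = 12816.35 - 12518.32   # partial sum of squares for beta_1
ms_e = 30.24                     # error mean square from Example 2
f0 = (ss_extra / 1) / ms_e       # r = 1 coefficient being tested
p_value = 1 - f.cdf(f0, 1, 14)   # upper tail of the F(1, 14) distribution
```

Running this reproduces $${{f}_{0}}\approx 9.86$$  and a  $$p$$  value of about 0.007, in agreement with the values above.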

Sequential Sum of Squares
The sequential sum of squares for a coefficient is the extra sum of squares when coefficients are added to the model in a sequence. For example, consider the model:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}+{{\beta }_{13}}{{x}_{1}}{{x}_{3}}+{{\beta }_{23}}{{x}_{2}}{{x}_{3}}+{{\beta }_{123}}{{x}_{1}}{{x}_{2}}{{x}_{3}}+\epsilon $$

The sequential sum of squares for $${{\beta }_{13}}$$  is the increase in the sum of squares when  $${{\beta }_{13}}$$  is added to the model observing the sequence of Eqn. (SeqSSEqn). Therefore this extra sum of squares can be obtained by taking the difference between the regression sum of squares for the model after $${{\beta }_{13}}$$  was added and the regression sum of squares for the model before  $${{\beta }_{13}}$$  was added to the model. The model after $${{\beta }_{13}}$$  is added is as follows:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}+{{\beta }_{13}}{{x}_{1}}{{x}_{3}}+\epsilon $$



This is because to maintain the sequence of Eqn. (SeqSSEqn) all coefficients preceding $${{\beta }_{13}}$$  must be included in the model. These are the coefficients $${{\beta }_{0}}$$,  $${{\beta }_{1}}$$ ,  $${{\beta }_{2}}$$ ,  $${{\beta }_{12}}$$  and  $${{\beta }_{3}}$$. Similarly the model before $${{\beta }_{13}}$$  is added must contain all coefficients of Eqn. (SeqSSEqnafter) except $${{\beta }_{13}}$$. This model can be obtained as follows:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+{{\beta }_{12}}{{x}_{1}}{{x}_{2}}+{{\beta }_{3}}{{x}_{3}}+\epsilon $$

The sequential sum of squares for $${{\beta }_{13}}$$  can be calculated as follows:


 * $$\begin{align}

& S{{S}_{R}}({{\beta }_{13}}|{{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}},{{\beta }_{12}},{{\beta }_{3}})= & S{{S}_{R}}\text{ for the model with }{{\beta }_{13}}-S{{S}_{R}}\text{ for the model without }{{\beta }_{13}} \\ & = & S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}},{{\beta }_{12}},{{\beta }_{3}},{{\beta }_{13}})- \\ & & S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}},{{\beta }_{12}},{{\beta }_{3}}) \end{align}$$

For the present case, $${{\beta }_{2}}=[{{\beta }_{13}}{]}'$$  and  $${{\beta }_{1}}=[{{\beta }_{0}},{{\beta }_{1}},{{\beta }_{2}},{{\beta }_{12}},{{\beta }_{3}}{]}'$$. It can be noted that for the sequential sum of squares $${{\beta }_{1}}$$  contains all coefficients preceding the coefficient being tested.

The sequential sums of squares for all terms add up to the regression sum of squares for the full model, but the sequential sums of squares depend on the order in which the terms enter the model.
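
Both properties can be demonstrated on synthetic data. The sketch below (illustrative names, numpy assumed) deliberately makes the two predictors correlated: the sequential extra sums of squares add up to the full regression sum of squares, while the partial sums of squares do not:

```python
import numpy as np

def regression_ss(X, y):
    """SS_R = y'[H - (1/n)J]y for the model with design matrix X."""
    n = len(y)
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    return float(y @ (H - np.ones((n, n)) / n) @ y)

rng = np.random.default_rng(1)
n = 20
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)        # deliberately correlated with x1
y = 1 + 2 * x1 + 3 * x2 + rng.normal(size=n)
one = np.ones(n)

ss_full = regression_ss(np.column_stack([one, x1, x2]), y)
ss_x1_only = regression_ss(np.column_stack([one, x1]), y)
ss_x2_only = regression_ss(np.column_stack([one, x2]), y)

# Sequential: enter x1 first, then x2; SS_R for the intercept-only model is 0
seq_x1 = ss_x1_only - 0.0
seq_x2 = ss_full - ss_x1_only
assert abs(seq_x1 + seq_x2 - ss_full) < 1e-8   # always adds up to SS_R(full)

# Partial: each term adjusted for all the others
partial_x1 = ss_full - ss_x2_only
partial_x2 = ss_full - ss_x1_only
# with correlated predictors the partial sums need not add up to SS_R(full)
assert abs(partial_x1 + partial_x2 - ss_full) > 1e-6
```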

Example 5

This example illustrates the partial $$F$$  test using the sequential sum of squares. The test is conducted for the coefficient $${{\beta }_{1}}$$  corresponding to the predictor variable  $${{x}_{1}}$$  for the data in Table 5.1. The regression model used for this data set in Example 1 is:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\epsilon $$

The null hypothesis to test the significance of $${{\beta }_{1}}$$  is:


 * $${{H}_{0}}\ \ :\ \ {{\beta }_{1}}=0$$

The statistic to test this hypothesis is:


 * $${{F}_{0}}=\frac{S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{1}})/r}{M{{S}_{E}}}$$

where $$S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{1}})$$  represents the sequential sum of squares for  $${{\beta }_{1}}$$,  $$r$$  represents the number of degrees of freedom for  $$S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{1}})$$  (which is one because there is just one coefficient,  $${{\beta }_{1}}$$ , being tested) and  $$M{{S}_{E}}$$  is the error mean square that can be obtained using Eqn. (ErrorMeanSquare) and has been calculated in Example 2 as 30.24.

The sequential sum of squares for $${{\beta }_{1}}$$  is the difference between the regression sum of squares for the model after adding  $${{\beta }_{1}}$$,  $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+\epsilon $$ , and the regression sum of squares for the model before adding  $${{\beta }_{1}}$$ ,  $$Y={{\beta }_{0}}+\epsilon $$. The regression sum of squares for the model $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+\epsilon $$  is obtained as shown next. First the design matrix for this model, $${{X}_{{{\beta }_{0}},{{\beta }_{1}}}}$$, is obtained by dropping the third column in the design matrix for the full model,  $$X$$  (the full design matrix,  $$X$$ , was obtained in Example 1). The third column of $$X$$  corresponds to coefficient  $${{\beta }_{2}}$$  which is no longer used in the present model. Therefore, the design matrix for the model, $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+\epsilon $$, is:


 * $${{X}_{{{\beta }_{0}},{{\beta }_{1}}}}=\left[ \begin{matrix}

1 & 41.9 \\   1 & 43.4  \\   . & .  \\   . & .  \\   1 & 77.8  \\ \end{matrix} \right]$$

The hat matrix corresponding to this design matrix is $${{H}_{{{\beta }_{0}},{{\beta }_{1}}}}$$. It can be calculated using $${{H}_{{{\beta }_{0}},{{\beta }_{1}}}}={{X}_{{{\beta }_{0}},{{\beta }_{1}}}}{{(X_{{{\beta }_{0}},{{\beta }_{1}}}^{\prime }{{X}_{{{\beta }_{0}},{{\beta }_{1}}}})}^{-1}}X_{{{\beta }_{0}},{{\beta }_{1}}}^{\prime }$$. Once $${{H}_{{{\beta }_{0}},{{\beta }_{1}}}}$$  is known, the regression sum of squares for the model  $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+\epsilon $$  can be calculated using Eqn. (RegressionSumofSquares) as:


 * $$\begin{align}

& S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}})= & {{y}^{\prime }}\left[ {{H}_{{{\beta }_{0}},{{\beta }_{1}}}}-(\frac{1}{n})J \right]y \\ & = & 12530.85 \end{align}$$



The regression sum of squares for the model $$Y={{\beta }_{0}}+\epsilon $$  is equal to zero since this model does not contain any variables. Therefore:


 * $$S{{S}_{R}}({{\beta }_{0}})=0$$

The sequential sum of squares for $${{\beta }_{1}}$$  is:


 * $$\begin{align}

& S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{1}})= & S{{S}_{R}}({{\beta }_{0}},{{\beta }_{1}})-S{{S}_{R}}({{\beta }_{0}}) \\ & = & 12530.85-0 \\ & = & 12530.85  \end{align}$$

Knowing the sequential sum of squares, the statistic to test the significance of $${{\beta }_{1}}$$  is:


 * $$\begin{align}

& {{f}_{0}}= & \frac{S{{S}_{R}}({{\beta }_{2}}|{{\beta }_{1}})/r}{M{{S}_{E}}} \\ & = & \frac{12530.85/1}{30.24} \\ & = & 414.366 \end{align}$$

The $$p$$  value corresponding to this statistic based on the  $$F$$  distribution with 1 degree of freedom in the numerator and 14 degrees of freedom in the denominator is:


 * $$\begin{align}

& p\text{ }value= & 1-P(F\le {{f}_{0}}) \\ & = & 8.46\times {{10}^{-12}} \end{align}$$

Assuming that the desired significance is 0.1, since $$p$$  value < 0.1,  $${{H}_{0}}\ \ :\ \ {{\beta }_{1}}=0$$  is rejected and it can be concluded that  $${{\beta }_{1}}$$  is significant. The test for $${{\beta }_{2}}$$  can be carried out in a similar manner. This result is shown in Figure SequentialSshot.

Confidence Intervals in Multiple Linear Regression
Calculation of confidence intervals for multiple linear regression models is similar to that for simple linear regression models, explained in Chapter 4.

Confidence Interval on Regression Coefficients
A 100( $$1-\alpha $$ ) percent confidence interval on the regression coefficient, $${{\beta }_{j}}$$, is obtained as follows:


 * $${{\hat{\beta }}_{j}}\pm {{t}_{\alpha /2,n-(k+1)}}\sqrt{{{{\hat{\sigma }}}^{2}}{{C}_{jj}}}$$

where $${{C}_{jj}}$$  is the  $$j$$ th diagonal element of  $${{({{X}^{\prime }}X)}^{-1}}$$.

The confidence intervals on the regression coefficients are displayed in the Regression Information table under the Low CI and High CI columns as shown in Figure RegrInfoSshot.

Confidence Interval on Fitted Values, $${{\hat{y}}_{i}}$$
A 100( $$1-\alpha $$ ) percent confidence interval on any fitted value, $${{\hat{y}}_{i}}$$, is given by:


 * $${{\hat{y}}_{i}}\pm {{t}_{\alpha /2,n-(k+1)}}\sqrt{{{{\hat{\sigma }}}^{2}}x_{i}^{\prime }{{({{X}^{\prime }}X)}^{-1}}{{x}_{i}}}$$


 * where:


 * $${{x}_{i}}=\left[ \begin{matrix}

1 \\   {{x}_{i1}}  \\ . \\   .  \\   .  \\   {{x}_{ik}}  \\ \end{matrix} \right]$$

In Example 1 (Section 5.MatrixApproach), the fitted value corresponding to the fifth observation was calculated as $${{\hat{y}}_{5}}=266.3$$. The 90% confidence interval on this value can be obtained as shown in Figure CIfittedvalueSshot. The values of 47.3 and 29.9 used in the figure are the values of the predictor variables corresponding to the fifth observation in Table 5.1.
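
The interval computation generalizes directly to code. A minimal sketch (the helper name is hypothetical; numpy and scipy are assumed) that evaluates the confidence interval on a fitted value at any row $${{x}_{i}}$$ :

```python
import numpy as np
from scipy.stats import t

def fitted_ci(X, y, x_i, alpha=0.10):
    """100(1-alpha)% confidence interval on the fitted value at x_i."""
    n, p = X.shape                       # p = k + 1
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y             # least squares estimates
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)     # MS_E, the estimate of sigma^2
    y_hat = x_i @ beta
    half = t.ppf(1 - alpha / 2, n - p) * np.sqrt(sigma2 * x_i @ XtX_inv @ x_i)
    return y_hat - half, y_hat + half
```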



Confidence Interval on New Observations
As explained in Chapter 4, the confidence interval on a new observation is also referred to as the prediction interval. The prediction interval takes into account both the error from the fitted model and the error associated with future observations. A 100( $$1-\alpha $$ ) percent confidence interval on a new observation, $${{\hat{y}}_{p}}$$, is obtained as follows:


 * $${{\hat{y}}_{p}}\pm {{t}_{\alpha /2,n-(k+1)}}\sqrt{{{{\hat{\sigma }}}^{2}}(1+x_{p}^{\prime }{{({{X}^{\prime }}X)}^{-1}}{{x}_{p}})}$$

where:


 * $${{x}_{p}}=\left[ \begin{matrix}

1 \\   {{x}_{p1}}  \\ . \\   .  \\   .  \\   {{x}_{pk}}  \\ \end{matrix} \right]$$

$${{x}_{p1}}$$ ,..., $${{x}_{pk}}$$  are the levels of the predictor variables at which the new observation,  $${{\hat{y}}_{p}}$$, needs to be obtained.
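
The prediction interval can be sketched the same way (illustrative names; numpy and scipy assumed). The only computational difference from the confidence interval on a fitted value is the leading 1 under the radical, which accounts for the variance of the future observation itself:

```python
import numpy as np
from scipy.stats import t

def prediction_interval(X, y, x_p, alpha=0.10):
    """100(1-alpha)% prediction interval on a new observation at x_p."""
    n, p = X.shape                       # p = k + 1
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)     # MS_E
    y_hat = x_p @ beta
    # the 1 + x_p'(X'X)^-1 x_p term widens the interval for future noise
    half = t.ppf(1 - alpha / 2, n - p) * np.sqrt(
        sigma2 * (1 + x_p @ XtX_inv @ x_p))
    return y_hat - half, y_hat + half
```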



In multiple linear regression, prediction intervals should only be obtained at the levels of the predictor variables where the regression model applies, and this condition is easy to overlook. Having values lying within the range of each predictor variable does not necessarily mean that the new observation lies in the region to which the model is applicable. For example, consider Figure JointRegion, where the shaded area shows the region to which a two variable regression model is applicable. The point corresponding to the $$p$$ th level of the first predictor variable,  $${{x}_{1}}$$, and the  $$p$$ th level of the second predictor variable,  $${{x}_{2}}$$ , does not lie in the shaded area, although both of these levels are within the range of the first and second predictor variables respectively. In this case, the regression model is not applicable at this point.

Measures of Model Adequacy
As in the case of simple linear regression, analysis of a fitted multiple linear regression model is important before inferences based on the model are undertaken. This section presents some techniques that can be used to check the appropriateness of the multiple linear regression model.

Coefficient of Multiple Determination, $${{R}^{2}}$$
The coefficient of multiple determination is similar to the coefficient of determination used in the case of simple linear regression. It is defined as:


 * $$\begin{align}

& {{R}^{2}}= & \frac{S{{S}_{R}}}{S{{S}_{T}}} \\ & = & 1-\frac{S{{S}_{E}}}{S{{S}_{T}}} \end{align}$$

$${{R}^{2}}$$ indicates the amount of total variability explained by the regression model. The positive square root of $${{R}^{2}}$$  is called the multiple correlation coefficient and measures the linear association between  $$Y$$  and the predictor variables,  $${{x}_{1}}$$,  $${{x}_{2}}$$ ... $${{x}_{k}}$$.

The value of $${{R}^{2}}$$  increases as more terms are added to the model, even if the new term does not contribute significantly to the model. An increase in the value of $${{R}^{2}}$$  cannot be taken as a sign to conclude that the new model is superior to the older model. A better statistic to use is the adjusted $${{R}^{2}}$$  statistic defined as follows:


 * $$\begin{align}

& R_{adj}^{2}= & 1-\frac{M{{S}_{E}}}{M{{S}_{T}}} \\ & = & 1-\frac{S{{S}_{E}}/(n-(k+1))}{S{{S}_{T}}/(n-1)} \\ & = & 1-(\frac{n-1}{n-(k+1)})(1-{{R}^{2}}) \end{align}$$

The adjusted $${{R}^{2}}$$  only increases when significant terms are added to the model. Addition of unimportant terms may lead to a decrease in the value of $$R_{adj}^{2}$$.

In DOE++, $${{R}^{2}}$$ and  $$R_{adj}^{2}$$  values are displayed as R-sq and R-sq(adj), respectively. Other values displayed along with these values are S, PRESS and R-sq(pred). As explained in Chapter 4, the value of S is the square root of the error mean square, $$M{{S}_{E}}$$, and represents the "standard error of the model."
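
As a concrete check, the $${{R}^{2}}$$  statistics can be computed from the sums of squares obtained in Example 2 (taking  $$n=17$$  and  $$k=2$$ , consistent with the 14 error degrees of freedom used in the examples):

```python
ss_r, ms_e, n, k = 12816.35, 30.24, 17, 2   # values from Example 2
ss_e = ms_e * (n - (k + 1))                  # error sum of squares
ss_t = ss_r + ss_e                           # total sum of squares
r2 = ss_r / ss_t
r2_adj = 1 - (n - 1) / (n - (k + 1)) * (1 - r2)
```

This gives $${{R}^{2}}\approx 0.968$$  and  $$R_{adj}^{2}\approx 0.963$$ ; the adjusted value is always the smaller of the two.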

PRESS is an abbreviation for prediction error sum of squares. It is the error sum of squares calculated using the PRESS residuals in place of the residuals, $${{e}_{i}}$$, in Eqn. (ErrorSumofSquares). The PRESS residual, $${{e}_{(i)}}$$, for a particular observation,  $${{y}_{i}}$$ , is obtained by fitting the regression model to the remaining observations. Then the value for a new observation, $${{\hat{y}}_{p}}$$, corresponding to the observation in question,  $${{y}_{i}}$$ , is obtained based on the new regression model. The difference between $${{y}_{i}}$$  and  $${{\hat{y}}_{p}}$$  gives  $${{e}_{(i)}}$$. The PRESS residual, $${{e}_{(i)}}$$, can also be obtained using  $${{h}_{ii}}$$ , the diagonal element of the hat matrix,  $$H$$ , as follows:


 * $${{e}_{(i)}}=\frac{{{e}_{i}}}{1-{{h}_{ii}}}$$

R-sq(pred), also referred to as prediction $${{R}^{2}}$$, is obtained using PRESS as shown next:


 * $$R_{pred}^{2}=1-\frac{PRESS}{S{{S}_{T}}}$$

The values of R-sq, R-sq(adj) and S are indicators of how well the regression model fits the observed data. The values of PRESS and R-sq(pred) are indicators of how well the regression model predicts new observations. For example, higher values of PRESS or lower values of R-sq(pred) indicate a model that predicts poorly. Figure RSqadjSshot. shows these values for the data in Table 5.1. The values indicate that the regression model fits the data well and also predicts well.
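
The hat-matrix shortcut for the PRESS residuals avoids refitting the model $$n$$  times. A sketch (hypothetical helper name, numpy assumed) that returns PRESS and R-sq(pred):

```python
import numpy as np

def press_stats(X, y):
    """PRESS and R-sq(pred) using the shortcut e_(i) = e_i / (1 - h_ii)."""
    n = len(y)
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    e = y - H @ y                        # ordinary residuals
    e_press = e / (1 - np.diag(H))       # PRESS residuals without refitting
    press = float(e_press @ e_press)
    ss_t = float(y @ y - n * y.mean() ** 2)
    return press, 1 - press / ss_t       # PRESS, R-sq(pred)
```

The shortcut is algebraically identical to the literal leave-one-out procedure described above, which makes it easy to verify on small data sets.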

Residual Analysis
Plots of residuals, $${{e}_{i}}$$, similar to the ones discussed in the previous chapter for simple linear regression, are used to check the adequacy of a fitted multiple linear regression model. The residuals are expected to be normally distributed with a mean of zero and a constant variance of $${{\sigma }^{2}}$$. In addition, they should not show any patterns or trends when plotted against any variable or in a time or run-order sequence. Residual plots may also be obtained using standardized and studentized residuals. Standardized residuals, $${{d}_{i}}$$, are obtained using the following equation:


 * $$\begin{align}

& {{d}_{i}}= & \frac{{{e}_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}}} \\ & = & \frac{{{e}_{i}}}{\sqrt{M{{S}_{E}}}} \end{align}$$



Standardized residuals are scaled so that the standard deviation of the residuals is approximately equal to one. This helps to identify possible outliers or unusual observations. However, standardized residuals may understate the true residual magnitude, hence studentized residuals, $${{r}_{i}}$$, are used in their place. Studentized residuals are calculated as follows:


 * $$\begin{align}

& {{r}_{i}}= & \frac{{{e}_{i}}}{\sqrt{{{{\hat{\sigma }}}^{2}}(1-{{h}_{ii}})}} \\ & = & \frac{{{e}_{i}}}{\sqrt{M{{S}_{E}}(1-{{h}_{ii}})}} \end{align}$$

where $${{h}_{ii}}$$  is the  $$i$$ th diagonal element of the hat matrix,  $$H$$. External studentized (or the studentized deleted) residuals may also be used. These residuals are based on the PRESS residuals mentioned in Section 5.Rsquare. The reason for using the external studentized residuals is that if the $$i$$ th observation is an outlier, it may influence the fitted model. In this case, the residual $${{e}_{i}}$$  will be small and may not disclose that  $$i$$ th observation is an outlier. The external studentized residual for the $$i$$ th observation,  $${{t}_{i}}$$, is obtained as follows:


 * $${{t}_{i}}={{e}_{i}}{{\left[ \frac{n-(k+1)-1}{S{{S}_{E}}(1-{{h}_{ii}})-e_{i}^{2}} \right]}^{0.5}}$$

Residual values for the data of Table 5.1 are shown in Figure ResidualSshot. These values are available using the Diagnostics icon in the Control Panel. Standardized residual plots for the data are shown in Figures Res1NPP to ResVsRuns. DOE++ compares the residual values to the critical values on the $$t$$  distribution for studentized and external studentized residuals. For other residuals the normal distribution is used. For example, for the data in Table 5.1, the critical values on the $$t$$  distribution at a significance of 0.1 are  $${{t}_{0.05,14}}=1.761$$  and  $$-{{t}_{0.05,14}}=-1.761$$  (as calculated in Example 3, Section 5.tTest). The studentized residual values corresponding to the 3rd and 17th observations lie outside the critical values. Therefore, the 3rd and 17th observations are outliers. This can also be seen on the residual plots in Figures ResVsFitted and ResVsRuns.
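
The studentized residual computation can be sketched as follows (hypothetical helper, numpy assumed). Comparing $$|{{r}_{i}}|$$  to the  $$t$$  critical value (1.761 for 14 degrees of freedom at a significance of 0.1) reproduces the outlier screen described above:

```python
import numpy as np

def studentized_residuals(X, y):
    """Internally studentized residuals r_i = e_i / sqrt(MS_E (1 - h_ii))."""
    n, p = X.shape                       # p = k + 1
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)                       # leverage values h_ii
    e = y - H @ y                        # ordinary residuals
    ms_e = e @ e / (n - p)               # error mean square
    return e / np.sqrt(ms_e * (1 - h))
```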

Outlying $$x$$ Observations
Residuals help to identify outlying $$y$$  observations. Outlying $$x$$  observations can be detected using leverage. Leverage values are the diagonal elements of the hat matrix, $${{h}_{ii}}$$. The $${{h}_{ii}}$$  values always lie between 0 and 1. Values of $${{h}_{ii}}$$  greater than  $$2(k+1)/n$$  are considered to be indicators of outlying  $$x$$  observations.

Influential Observations Detection
Once an outlier is identified, it is important to determine if the outlier has a significant effect on the regression model. One measure to detect influential observations is Cook's distance measure which is computed as follows:


 * $${{D}_{i}}=\frac{r_{i}^{2}}{(k+1)}\left[ \frac{{{h}_{ii}}}{(1-{{h}_{ii}})} \right]$$

To use Cook's distance measure, the $${{D}_{i}}$$  values are compared to percentile values on the  $$F$$  distribution with  $$(k+1,n-(k+1))$$  degrees of freedom. If the percentile value is less than 10 or 20 percent, then the $$i$$ th case has little influence on the fitted values. However, if the percentile value is close to 50 percent or greater, the $$i$$ th case is influential, and fitted values with and without the  $$i$$ th case will differ substantially.[Kutner]

Example 6

Cook's distance measure can be calculated as shown next. The distance measure is calculated for the first observation of the data in Table 5.1. The remaining values along with the leverage values are shown in Figure CookSshot. The standardized residual corresponding to the first observation is:


 * $$\begin{align}

& {{r}_{1}}= & \frac{{{e}_{1}}}{\sqrt{M{{S}_{E}}(1-{{h}_{11}})}} \\ & = & \frac{1.3127}{\sqrt{30.24(1-0.2755)}} \\ & = & 0.2804 \end{align}$$

Cook's distance measure for the first observation can now be calculated as:


 * $$\begin{align}

& {{D}_{1}}= & \frac{r_{1}^{2}}{(k+1)}\left[ \frac{{{h}_{11}}}{(1-{{h}_{11}})} \right] \\ & = & \frac{{{0.2804}^{2}}}{(2+1)}\left[ \frac{0.2755}{(1-0.2755)} \right] \\ & = & 0.01 \end{align}$$

The 50th percentile value for $${{F}_{3,14}}$$  is 0.83. Since all $${{D}_{i}}$$  values are less than this value there are no influential observations.
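
The per-observation computation in the example generalizes to a short routine. The sketch below (illustrative name, numpy assumed) returns $${{D}_{i}}$$  for every observation; the values can then be compared to the 50th percentile of the  $$F$$  distribution with  $$(k+1,n-(k+1))$$  degrees of freedom, as in the example:

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's distance D_i = (r_i^2 / (k+1)) * h_ii / (1 - h_ii)."""
    n, p = X.shape                       # p = k + 1
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)                       # leverage values
    e = y - H @ y
    ms_e = e @ e / (n - p)
    r = e / np.sqrt(ms_e * (1 - h))      # studentized residuals
    return r ** 2 / p * h / (1 - h)
```

A useful cross-check is the equivalent leave-one-out form, $${{D}_{i}}=\Vert \hat{y}-{{\hat{y}}_{(i)}}{{\Vert }^{2}}/((k+1)M{{S}_{E}})$$ , which measures how far the fitted values move when the  $$i$$ th case is deleted.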



Lack-of-Fit Test
The lack-of-fit test for simple linear regression discussed in Chapter 4 may also be applied to multiple linear regression to check the appropriateness of the fitted response surface and see if a higher order model is required. Data for $$m$$  replicates may be collected as follows for all  $$n$$  levels of the predictor variables:


 * $$\begin{align}

& & {{y}_{11}},{{y}_{12}},....,{{y}_{1m}}\text{     }m\text{ repeated observations at the first level } \\ & & {{y}_{21}},{{y}_{22}},....,{{y}_{2m}}\text{     }m\text{ repeated observations at the second level} \\ & & ... \\  &  & {{y}_{i1}},{{y}_{i2}},....,{{y}_{im}}\text{       }m\text{ repeated observations at the }i\text{th level} \\ & & ... \\  &  & {{y}_{n1}},{{y}_{n2}},....,{{y}_{nm}}\text{    }m\text{ repeated observations at the }n\text{th level } \end{align}$$

The sum of squares due to pure error, $$S{{S}_{PE}}$$, can be obtained as discussed in the previous chapter as:


 * $$S{{S}_{PE}}=\underset{i=1}{\overset{n}{\mathop \sum }}\,\underset{j=1}{\overset{m}{\mathop \sum }}\,{{({{y}_{ij}}-{{\bar{y}}_{i}})}^{2}}$$

The number of degrees of freedom associated with $$S{{S}_{PE}}$$  are:


 * $$dof(S{{S}_{PE}})=nm-n$$

Knowing $$S{{S}_{PE}}$$, sum of squares due to lack-of-fit,  $$S{{S}_{LOF}}$$ , can be obtained as:


 * $$S{{S}_{LOF}}=S{{S}_{E}}-S{{S}_{PE}}$$

The number of degrees of freedom associated with $$S{{S}_{LOF}}$$  are:

$$\begin{align} & dof(S{{S}_{LOF}})= & dof(S{{S}_{E}})-dof(S{{S}_{PE}}) \\ & = & [nm-(k+1)]-(nm-n) \\ & = & n-(k+1) \end{align}$$

The test statistic for the lack-of-fit test is:


 * $$\begin{align}

& {{F}_{0}}= & \frac{S{{S}_{LOF}}/dof(S{{S}_{LOF}})}{S{{S}_{PE}}/dof(S{{S}_{PE}})} \\ & = & \frac{M{{S}_{LOF}}}{M{{S}_{PE}}} \end{align}$$
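
The whole procedure can be sketched end to end (the function name is hypothetical; numpy and scipy are assumed). The pure error sum of squares is accumulated within each replicate group, and the remainder of $$S{{S}_{E}}$$  is attributed to lack of fit:

```python
import numpy as np
from scipy.stats import f

def lack_of_fit_test(X, y, groups, k):
    """Partition SS_E into pure error and lack of fit; return F0 and p value.

    groups[i] labels the x-level (replicate group) of observation i;
    k is the number of predictor variables in the fitted model.
    """
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    e = y - H @ y
    ss_e = float(e @ e)                          # error sum of squares
    levels = np.unique(groups)
    ss_pe = sum(float(np.sum((y[groups == g] - y[groups == g].mean()) ** 2))
                for g in levels)                 # pure error
    df_pe = len(y) - len(levels)                 # nm - n
    df_lof = len(levels) - (k + 1)               # n - (k + 1)
    f0 = ((ss_e - ss_pe) / df_lof) / (ss_pe / df_pe)
    return f0, 1 - f.cdf(f0, df_lof, df_pe)
```

A large $${{F}_{0}}$$  (small  $$p$$  value) indicates that a higher order model should be considered.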

Polynomial Regression Models
Polynomial regression models are used when the response is curvilinear. The equation shown next presents a second order polynomial regression model with one predictor variable:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{11}}x_{1}^{2}+\epsilon $$

Usually, coded values are used in these models. Values of the variables are coded by centering (expressing the levels of the variable as deviations from its mean value) and then scaling (dividing the deviations by half of the range of the variable).


 * $$coded\text{ }value=\frac{actual\text{ }value-mean}{half\text{ }of\text{ }range}$$

The reason for using coded predictor variables is that many times $$x$$  and  $${{x}^{2}}$$  are highly correlated and, if uncoded values are used, there may be computational difficulties while calculating the  $${{({{X}^{\prime }}X)}^{-1}}$$  matrix to obtain the estimates,  $$\hat{\beta }$$, of the regression coefficients using Eqn. (LeastSquareEstimate).
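
The coding transform itself is a one-liner. A minimal sketch (the function name is illustrative):

```python
def coded_value(actual, low, high):
    """Code a factor level so that low maps to -1 and high maps to +1."""
    mean = (low + high) / 2.0           # center of the variable's range
    half_range = (high - low) / 2.0     # half of the range
    return (actual - mean) / half_range
```

For a factor ranging from 100 to 200, `coded_value(150, 100, 200)` returns 0.0, the center of the range, while the endpoints map to -1 and +1.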

Qualitative Factors
The multiple linear regression model also supports the use of qualitative factors. For example, gender may need to be included as a factor in a regression model. One of the ways to include qualitative factors in a regression model is to employ indicator variables. Indicator variables take on values of 0 or 1. For example, an indicator variable may be used with a value of 1 to indicate female and a value of 0 to indicate male.


 * $${{x}_{1}}=\{\begin{array}{*{35}{l}}

1\text{     Female}  \\ 0\text{     Male}  \\ \end{array}$$

In general ( $$n-1$$ ) indicator variables are required to represent a qualitative factor with $$n$$  levels. As an example, a qualitative factor representing three types of machines may be represented as follows using two indicator variables:


 * $$\begin{align}

& {{x}_{1}}= & 1,\text{  }{{x}_{2}}=0\text{     Machine Type I} \\ & {{x}_{1}}= & 0,\text{  }{{x}_{2}}=1\text{     Machine Type II} \\ & {{x}_{1}}= & 0,\text{  }{{x}_{2}}=0\text{     Machine Type III} \end{align}$$

An alternative coding scheme for this example is to use a value of -1 for all indicator variables when representing the last level of the factor:


 * $$\begin{align}

& {{x}_{1}}= & 1,\text{  }{{x}_{2}}=0\text{           Machine Type I} \\ & {{x}_{1}}= & 0,\text{  }{{x}_{2}}=1\text{           Machine Type II} \\ & {{x}_{1}}= & -1,\text{  }{{x}_{2}}=-1\text{     Machine Type III} \end{align}$$

Indicator variables are also referred to as dummy variables or binary variables.
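
The 0/1 encoding scheme can be sketched as a small helper (illustrative name, not from DOE++), using the three machine types above:

```python
def indicator_encode(level, levels):
    """Encode a qualitative factor with n levels as (n - 1) indicator
    variables; the last level is the reference and maps to all zeros."""
    return [1 if level == ref else 0 for ref in levels[:-1]]

machines = ["Type I", "Type II", "Type III"]
rows = [indicator_encode(m, machines) for m in machines]
# Type I -> [1, 0], Type II -> [0, 1], Type III -> [0, 0]
```

These indicator columns are then appended to the design matrix exactly like quantitative predictors.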

Example 7

Consider data from two types of reactors of a chemical process shown in Table 5.3 where the yield values are recorded for various levels of factor $${{x}_{1}}$$. Assuming there are no interactions between the reactor type and $${{x}_{1}}$$, a regression model can be fitted to this data as shown next. Since the reactor type is a qualitative factor with two levels, it can be represented by using one indicator variable. Let $${{x}_{2}}$$  be the indicator variable representing the reactor type, with 0 representing the first type of reactor and 1 representing the second type of reactor.


 * $${{x}_{2}}=\{\begin{array}{*{35}{l}}

0\text{     Reactor Type I}  \\ 1\text{     Reactor Type II}  \\ \end{array}$$



Data entry in DOE++ for this example is shown in Figure IndiVarDesignSshot. The regression model for this data is:


 * $$Y={{\beta }_{0}}+{{\beta }_{1}}{{x}_{1}}+{{\beta }_{2}}{{x}_{2}}+\epsilon $$

The $$X$$  and  $$y$$  matrices for the given data are:



The estimated regression coefficients for the model can be obtained using Eqn. (LeastSquareEstimate) as:


 * $$\begin{align}

& \hat{\beta }= & {{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y \\ & = & \left[ \begin{matrix} 153.7 \\   2.4  \\   -27.5  \\ \end{matrix} \right] \end{align}$$

Therefore, the fitted regression model is:


 * $$\hat{y}=153.7+2.4{{x}_{1}}-27.5{{x}_{2}}$$
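The least squares estimate $$\hat{\beta }={{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}y$$ can be computed directly in numpy. Since the Table 5.3 values are not reproduced here, the sketch below uses hypothetical stand-in data, so its coefficients will not match 153.7, 2.4 and -27.5; it only illustrates the mechanics of fitting a model with an indicator variable.

```python
import numpy as np

# Hypothetical stand-in for the Table 5.3 data: levels of x1, the
# reactor-type indicator x2 (0 = Type I, 1 = Type II), and yields.
x1 = np.array([10.0, 12.0, 14.0, 10.0, 12.0, 14.0])
x2 = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
y = np.array([178.0, 183.0, 187.0, 150.0, 155.0, 160.0])

# Design matrix [1, x1, x2]; lstsq solves the normal equations
# beta_hat = (X'X)^(-1) X'y in a numerically stable way.
X = np.column_stack([np.ones_like(x1), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # [intercept, slope in x1, shift for Reactor Type II]
```

The coefficient on $${{x}_{2}}$$ is simply the estimated difference in mean yield between the two reactor types at any fixed $${{x}_{1}}$$, which is why a single indicator suffices for a two-level factor.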

Note that since $${{x}_{2}}$$  represents a qualitative predictor variable, the fitted regression model cannot be plotted simultaneously against  $${{x}_{1}}$$  and  $${{x}_{2}}$$  (the resulting surface would be meaningless along the  $${{x}_{2}}$$  dimension). To illustrate this, a scatter plot of the data in Table 5.3 against $${{x}_{2}}$$  is shown in Figure IndiVarScatterPlot. For qualitative factors, the relationship between the response (yield) and the factor (reactor type) cannot be characterized as linear, quadratic, cubic, etc. The only question that can be answered for such factors is whether they contribute significantly to the regression model. This can be done by employing the partial $$F$$  test of Section 5.FtestPartial (using the extra sum of squares of the indicator variables representing these factors). The results of the test for the present example are shown in the ANOVA table of Figure IndiVarResultsSshot, which shows that $${{x}_{2}}$$  (reactor type) contributes significantly to the fitted regression model.

Multicollinearity
At times the predictor variables included in a multiple linear regression model may be found to be dependent on each other. Multicollinearity is said to exist in a multiple regression model when there are strong dependencies between the predictor variables. Multicollinearity affects the regression coefficients and the extra sum of squares of the predictor variables. In a model with multicollinearity, the estimate of the regression coefficient of a predictor variable depends on what other predictor variables are included in the model; the dependence may even change the sign of the regression coefficient. In such models, an estimated regression coefficient may not be found significant individually (when using the $$t$$  test on the individual coefficient or looking at the  $$p$$  value) even though a statistical relation is found to exist between the response variable and the set of predictor variables (when using the  $$F$$  test for the set of predictor variables). Therefore, you should be careful when interpreting individual predictor variables in models that have multicollinearity. Care should also be taken when looking at the extra sum of squares for a predictor variable that is correlated with other variables, because in models with multicollinearity the extra sum of squares is not unique and depends on the other predictor variables included in the model.





Multicollinearity can be detected using the variance inflation factor (abbreviated $$VIF$$ ). The $$VIF$$  for the coefficient  $${{\beta }_{j}}$$  is defined as:

$$VIF=\frac{1}{(1-R_{j}^{2})}$$

where $$R_{j}^{2}$$  is the coefficient of multiple determination resulting from regressing the  $$j$$ th predictor variable,  $${{x}_{j}}$$, on the remaining  $$k-1$$  predictor variables. Mean values of $$VIF$$  considerably greater than 1 indicate multicollinearity problems. A few methods of dealing with multicollinearity include: increasing the number of observations in a way designed to break up dependencies among the predictor variables, combining the linearly dependent predictor variables into one variable, eliminating unimportant variables from the model, or using coded variables.
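The definition above translates directly into code: regress each predictor on the others, compute $$R_{j}^{2}$$, and invert $$1-R_{j}^{2}$$. The sketch below is an illustrative numpy implementation on hypothetical correlated predictors, not the DOE++ computation itself.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the predictor
    matrix X: regress column j on the remaining columns (plus an
    intercept) and return 1 / (1 - R_j^2)."""
    n, k = X.shape
    out = []
    for j in range(k):
        xj = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, xj, rcond=None)
        resid = xj - others @ coef
        ss_tot = (xj - xj.mean()) @ (xj - xj.mean())
        r2 = 1.0 - (resid @ resid) / ss_tot
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Hypothetical predictors where x2 is strongly correlated with x1,
# so both VIF values come out well above 1.
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = 2.0 * x1 + rng.normal(scale=0.3, size=50)
print(vif(np.column_stack([x1, x2])))
```

For independent predictors the same function returns values near 1, which is the baseline against which "considerably greater than 1" is judged.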

Example 8

Variance inflation factors can be obtained for the data in Table 5.1. To calculate the variance inflation factor for $${{x}_{1}}$$,  $$R_{1}^{2}$$  has to be calculated. $$R_{1}^{2}$$ is the coefficient of determination for the model in which  $${{x}_{1}}$$  is regressed on the remaining variables. In this example there is just one remaining variable, $${{x}_{2}}$$. If a regression model is fit to the data, taking $${{x}_{1}}$$  as the response variable and  $${{x}_{2}}$$  as the predictor variable, then the design matrix and the vector of observations are:


 * $$X=\left[ \begin{matrix} 1 & 29.1 \\ 1 & 29.3 \\ \vdots & \vdots \\ 1 & 32.9 \end{matrix} \right]\text{     }y=\left[ \begin{matrix} 41.9 \\ 43.4 \\ \vdots \\ 77.8 \end{matrix} \right]$$

The regression sum of squares for this model can be obtained using Eqn. (RegressionSumofSquares) as:


 * $$\begin{align} S{{S}_{R}}&={{y}^{\prime }}\left[ H-\left(\frac{1}{n}\right)J \right]y \\ &=1988.6 \end{align}$$

where $$H$$  is the hat matrix (calculated as  $$H=X{{({{X}^{\prime }}X)}^{-1}}{{X}^{\prime }}$$ ) and  $$J$$  is the matrix of ones. The total sum of squares for the model can be calculated using Eqn. (TotalSumofSquares) as:


 * $$\begin{align} S{{S}_{T}}&={{y}^{\prime }}\left[ I-\left(\frac{1}{n}\right)J \right]y \\ &=2182.9 \end{align}$$

where $$I$$  is the identity matrix. Therefore:


 * $$\begin{align} R_{1}^{2}&=\frac{S{{S}_{R}}}{S{{S}_{T}}} \\ &=\frac{1988.6}{2182.9} \\ &=0.911 \end{align}$$

Then the variance inflation factor for $${{x}_{1}}$$  is:


 * $$\begin{align} VI{{F}_{1}}&=\frac{1}{1-R_{1}^{2}} \\ &=\frac{1}{1-0.911} \\ &=11.2 \end{align}$$

The variance inflation factor for $${{x}_{2}}$$,  $$VI{{F}_{2}}$$ , can be obtained in a similar manner. In DOE++, the variance inflation factors are displayed in the VIF column of the Regression Information Table as shown in Figure VIFSshot. Since the values of the variance inflation factors obtained are considerably greater than 1, multicollinearity is an issue for the data in Table 5.1.