13.6 Significance Testing of Each Variable
Within a multiple regression model, we may want to know whether a particular \(x\)-variable is making a useful contribution to the model. That is, given the presence of the other \(x\)-variables in the model, does a particular \(x\)-variable help us predict or explain the \(y\)-variable?
As an example, suppose we have the model
\[ y=\beta _{0}+\beta _{1}x_{1}+\beta_{2}x_{2}+\beta_{3}x_{3}+\epsilon. \]
To determine whether variable \(x_1\) is a useful predictor in this model, we could test
\[ \begin{align*} H_{0}&:\beta_{1}=0 \\ H_{A}&:\beta_{1}\neq 0. \end{align*} \] Under this null hypothesis, the mean of \(y\) does not change as \(x_1\) changes while \(x_2\) and \(x_3\) are held fixed; that is, \(y\) and \(x_1\) are not linearly related, given the other predictors. The test statistic is \(t=\hat{\beta}_1/\operatorname{se}(\hat{\beta}_1)\), which under \(H_0\) follows a \(t\)-distribution with degrees of freedom equal to the sample size minus the number of estimated parameters (here, \(n-4\)). If we cannot reject this null hypothesis, we conclude that we do not need variable \(x_1\) in the model, given that variables \(x_2\) and \(x_3\) remain in it.
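To make this concrete, here is a minimal sketch in Python using statsmodels with simulated data; the sample size, coefficient values, and noise level are illustrative assumptions, not part of the original example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100

# Simulate three predictors; in the assumed true model, x1 and x2
# affect y while x3 does not (illustrative coefficient choices).
X = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.normal(size=n),
    "x3": rng.normal(size=n),
})
y = 2.0 + 1.5 * X["x1"] + 0.8 * X["x2"] + rng.normal(scale=1.0, size=n)

# Fit y = b0 + b1*x1 + b2*x2 + b3*x3 + error.
results = sm.OLS(y, sm.add_constant(X)).fit()

# Each coefficient row in the summary reports the t-test of
# H0: beta_j = 0, given that the other predictors stay in the model.
print(results.summary())
print("t-statistic for x1:", results.tvalues["x1"])
print("p-value for x1:   ", results.pvalues["x1"])
```

The reported \(p\)-value for \(x_1\) answers exactly the question posed above: does \(x_1\) help explain \(y\), given that \(x_2\) and \(x_3\) remain in the model? In this simulated setup, the \(p\)-value for \(x_3\) would typically be large, since its true coefficient is zero.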
- Multiple linear regression, in contrast to simple linear regression, involves several predictors, so testing each variable separately can be misleading. For example, suppose we apply two separate tests for two predictors, say \(x_1\) and \(x_2\), and both tests have high \(p\)-values. One test suggests \(x_1\) is not needed in a model that includes all the other predictors, while the other suggests the same of \(x_2\). But this does not necessarily mean that both \(x_1\) and \(x_2\) can be dropped: each test assumes the other variable stays in the model. It may well turn out that we would do better to omit either \(x_1\) or \(x_2\) from the model, but not both, a situation that arises especially when the two predictors are highly correlated (see the sketch below).
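The following sketch illustrates this pitfall with two nearly collinear simulated predictors; the setup is an assumption chosen to make the effect visible, not a claim about any particular dataset. With this much correlation, each individual \(t\)-test tends to produce a large \(p\)-value, while a joint \(F\)-test of \(H_0:\beta_1=\beta_2=0\) rejects.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 60

# Two nearly collinear predictors: x2 is x1 plus a little noise.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)
y = 3.0 + 1.0 * x1 + 1.0 * x2 + rng.normal(scale=2.0, size=n)

X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2}))
results = sm.OLS(y, X).fit()

# Individual t-tests: collinearity inflates the standard errors,
# so both p-values are typically large for a draw like this.
print(results.pvalues[["x1", "x2"]])

# Joint F-test of H0: beta1 = beta2 = 0. This usually rejects,
# showing the pair is useful even though neither individual
# test flags it on its own.
print(results.f_test("(x1 = 0), (x2 = 0)"))
```

Whether the individual tests actually fail to reject depends on the particular simulated draw, which is the broader point: decisions about dropping correlated predictors should rest on joint tests or model comparison, not on scanning individual \(p\)-values one at a time.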