
7.1 Hypothesis Tests and Confidence Intervals for a Single Coefficient

We will first discuss how to compute standard errors, test hypotheses and construct confidence intervals for a single regression coefficient \(\beta_j\) in a multiple regression model. The basic idea is summarized in Key Concept 7.1.

Key Concept 7.1

Testing the Hypothesis \(\beta_j = \beta_{j,0}\)
Against the Alternative \(\beta_j \neq \beta_{j,0}\)

  1. Compute the standard error of \(\hat{\beta}_j\).
  2. Compute the \(t\)-statistic, \[t^{act} = \frac{\hat{\beta}_j - \beta_{j,0}}{SE(\hat{\beta}_j)}.\]
  3. Compute the \(p\)-value, \[p\text{-value} = 2 \Phi(-|t^{act}|),\]
where \(t^{act}\) is the value of the \(t\)-statistic actually computed. Reject the hypothesis at the \(5\%\) significance level if the \(p\)-value is less than \(0.05\) or, equivalently, if \(|t^{act}| > 1.96\). The standard error and (typically) the \(t\)-statistic and the corresponding \(p\)-value for testing \(\beta_j = 0\) are computed automatically by suitable R functions, e.g., by summary().

Testing a hypothesis about a single coefficient in the multiple regression model proceeds just as in the simple regression model.

You can easily see this by inspecting the coefficient summary of the regression model

\[ TestScore = \beta_0 + \beta_1 \times STR + \beta_2 \times english + u \]

already discussed in Chapter 6. Let us review this:


# estimate the multiple regression model and obtain a heteroskedasticity-robust
# coefficient summary (coeftest() is provided by the lmtest package, vcovHC() by sandwich)
model <- lm(score ~ STR + english, data = CASchools)
coeftest(model, vcov. = vcovHC, type = "HC1")
#> 
#> t test of coefficients:
#> 
#>               Estimate Std. Error  t value Pr(>|t|)    
#> (Intercept) 686.032245   8.728225  78.5993  < 2e-16 ***
#> STR          -1.101296   0.432847  -2.5443  0.01131 *  
#> english      -0.649777   0.031032 -20.9391  < 2e-16 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

You can verify that these quantities are computed as in the simple regression model by manually computing the \(t\)-statistics and \(p\)-values from the output above, using R as a calculator.
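For instance, the \(t\)-statistic reported for STR can be reproduced by dividing the coefficient estimate by its standard error. The following minimal sketch extracts both from the robust coefficient matrix returned by coeftest (row 2 corresponds to STR; columns 1 and 2 hold the estimate and its standard error; robust_coefs is just a convenience name used here):

# robust coefficient matrix as computed above
robust_coefs <- coeftest(model, vcov. = vcovHC, type = "HC1")

# t-statistic for H0: beta_1 = 0 is the estimate divided by its standard error;
# this should reproduce the reported value of about -2.54
robust_coefs[2, 1] / robust_coefs[2, 2]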

Using the definition of the two-sided \(p\)-value given in Key Concept 7.1, we can similarly confirm the \(p\)-value reported for the test of the hypothesis that \(\beta_1\), the coefficient on STR, is zero.

# compute the two-sided p-value for H0: beta_1 = 0,
# based on the t distribution with n - k - 1 degrees of freedom (as coeftest does)
2 * (1 - pt(abs(coeftest(model, vcov. = vcovHC, type = "HC1")[2, 3]),
            df = model$df.residual))
#> [1] 0.01130921
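Key Concept 7.1 states the \(p\)-value in terms of the standard normal distribution, whereas the computation above uses the \(t\) distribution, as coeftest does. With several hundred degrees of freedom the two give very similar results, which can be checked with the normal approximation (a minimal sketch):

# large-sample normal approximation to the p-value, as stated in Key Concept 7.1;
# the result is very close to the t-based value above
2 * pnorm(-abs(coeftest(model, vcov. = vcovHC, type = "HC1")[2, 3]))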

Key Concept 7.2

Confidence Intervals for a Single Coefficient in Multiple Regression

A \(95\%\) two-sided confidence interval for the coefficient \(\beta_j\) is an interval that contains the true value of \(\beta_j\) with a \(95\%\) probability; that is, it contains the true value of \(\beta_j\) in \(95\%\) of all repeated samples. Equivalently, it is the set of values of \(\beta_j\) that cannot be rejected by a \(5\%\) two-sided hypothesis test. When the sample size is large, the \(95\%\) confidence interval for \(\beta_j\) is \[\left[\hat{\beta}_j - 1.96 \times SE(\hat{\beta}_j), \, \hat{\beta}_j + 1.96 \times SE(\hat{\beta}_j)\right].\]
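As an illustration, the large-sample interval from Key Concept 7.2 can be computed for \(\beta_1\), the coefficient on STR, from the robust coefficient summary obtained above (a minimal sketch; robust_coefs is again just a convenience name):

# robust estimate (column 1) and standard error (column 2) for the coefficient on STR (row 2)
robust_coefs <- coeftest(model, vcov. = vcovHC, type = "HC1")

# large-sample 95% confidence interval for beta_1
c(lower = robust_coefs[2, 1] - 1.96 * robust_coefs[2, 2],
  upper = robust_coefs[2, 1] + 1.96 * robust_coefs[2, 2])

Note that R's built-in confint() instead uses homoskedasticity-only standard errors and quantiles of the \(t\) distribution, so its results will differ slightly from this robust large-sample interval.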