Least-Squares Regression

Elementary Statistics

MTH-161D | Spring 2025 | University of Portland

April 11, 2025

Objectives

Previously…

The Linear Model

A linear model is written as

\[ y = \beta_0 + \beta_1 x + \epsilon \]

where \(y\) is the outcome, \(x\) is the predictor, \(\beta_0\) is the intercept, and \(\beta_1\) is the slope. The term \(\epsilon\) is the model's error, the part of the outcome the line does not explain.

Notation:

We can use the sample statistics \(b_0\) and \(b_1\) as point estimates to infer the true value of the population parameters \(\beta_0\) and \(\beta_1\).
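For instance, here is a minimal R sketch of this idea; the simulated data and every numeric value in it are hypothetical, chosen only for illustration:

```r
# Simulate data from a known linear model: y = 2 + 3x + error
set.seed(42)                          # reproducible randomness
x <- runif(100, min = 0, max = 10)    # predictor values
y <- 2 + 3 * x + rnorm(100, sd = 2)   # true beta_0 = 2, beta_1 = 3

fit <- lm(y ~ x)   # least-squares fit
coef(fit)          # b_0 and b_1, point estimates of beta_0 and beta_1
```

With enough data, the estimates \(b_0\) and \(b_1\) land close to the true values 2 and 3.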

Examples of Best Fit Linear Models on Data

Sample data with their best fitting lines (top row) and their corresponding residual plots (bottom row).


Idealized Examples of Residual Scatter Plots

Image Source: [Model Validation: Interpreting Residual Plots, by Daniel Hocking, in R-bloggers](https://www.r-bloggers.com/2011/07/model-validation-interpreting-residual-plots/)


Terms:

A residual is the difference between an observed value and the value the model predicts, \(\epsilon_i = y_i - \hat{y}_i\). A residual plot displays the residuals against the predictor and helps assess whether a linear model is appropriate.

Ordinary Least Squares Assumptions

  - Linearity: the relationship between the predictor and the outcome is linear.
  - Nearly normal residuals: the residuals are approximately normally distributed.
  - Constant variability: the spread of the residuals is roughly constant across all values of the predictor.
  - Independent observations: the observations are independent of one another.

Sum of Squared Error (SSE) and the Total Sum of Squares (SST)

The Sum of Squared Error (SSE) measures the left-over variability in the \(y\) values once we know \(x\).

\[ SSE = (\epsilon_1)^2 + (\epsilon_2)^2 + \cdots + (\epsilon_n)^2 \]

The Total Sum of Squares (SST) measures the variability in the \(y\) values by how far they tend to fall from their mean, \(\bar{y}\).

\[ SST = (y_1 - \bar{y})^2 + (y_2 - \bar{y})^2 + \cdots + (y_n - \bar{y})^2 \]

where \(\bar{y} = \frac{1}{n} \left(y_1 + y_2 + \cdots + y_n \right)\), and \(n\) is the number of observations.
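As a sketch, both quantities are one line each in R (reusing the same kind of hypothetical simulated data as above):

```r
set.seed(42)                          # hypothetical simulated data
x <- runif(100, min = 0, max = 10)
y <- 2 + 3 * x + rnorm(100, sd = 2)

e   <- y - fitted(lm(y ~ x))          # residuals e_i = y_i - yhat_i
SSE <- sum(e^2)                       # left-over variability given x
SST <- sum((y - mean(y))^2)           # total variability around ybar
c(SSE = SSE, SST = SST)
```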

Minimizing the SSE

To find the best linear fit, we minimize the SSE.

\[ \begin{aligned} SSE & = (\epsilon_1)^2 + (\epsilon_2)^2 + \cdots + (\epsilon_n)^2 \\ & = (y_1 - \hat{y}_1)^2 + (y_2 - \hat{y}_2)^2 + \cdots + (y_n - \hat{y}_n)^2 \end{aligned} \]

Plugging in the linear equation \(\hat{y} = b_0 + b_1 x\), we have

\[ SSE = (y_1 - (b_0 + b_1 x_1))^2 + (y_2 - (b_0 + b_1 x_2))^2 + \cdots + (y_n - (b_0 + b_1 x_n))^2. \]

Minimizing the above equation over all possible values of \(b_0\) and \(b_1\) is a calculus problem: take the partial derivatives of SSE with respect to \(b_0\) and \(b_1\), set both equal to zero, and solve for \(b_0\) and \(b_1\).
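Spelling out the omitted step, setting the two partial derivatives to zero yields the normal equations

\[ \frac{\partial \, SSE}{\partial b_0} = -2 \sum_{i=1}^{n} \left( y_i - b_0 - b_1 x_i \right) = 0 \hspace{10px} \text{ and } \hspace{10px} \frac{\partial \, SSE}{\partial b_1} = -2 \sum_{i=1}^{n} x_i \left( y_i - b_0 - b_1 x_i \right) = 0, \]

which are solved simultaneously for \(b_0\) and \(b_1\).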

Long story short,

\[ \begin{aligned} b_1 = & \frac{s_y}{s_x} r \\ b_0 = & \bar{y} - b_1 \bar{x} \end{aligned} \]

where \(r\) is the correlation between \(x\) and \(y\), \(s_x\) and \(s_y\) are their sample standard deviations, and \(\bar{x}\) and \(\bar{y}\) are their sample means.
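As a sketch, the closed-form formulas are easy to verify in R (same hypothetical simulated data as above):

```r
set.seed(42)                          # hypothetical simulated data
x <- runif(100, min = 0, max = 10)
y <- 2 + 3 * x + rnorm(100, sd = 2)

b1 <- (sd(y) / sd(x)) * cor(x, y)   # slope: b1 = (s_y / s_x) r
b0 <- mean(y) - b1 * mean(x)        # intercept: b0 = ybar - b1 * xbar
c(b0 = b0, b1 = b1)
```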

Least-Squares Example Visualization

Least-Squares Example Visualization: Shown here is some data (orange dots) and the best fit linear model (red line). You can try this [least-squares regression interactive demo](https://phet.colorado.edu/sims/html/least-squares-regression/latest/least-squares-regression_en.html) to visualize how it works.


Finding the Best Fit

To find the best fit linear model to data, we compute the slope and intercept by using the correlation and standard deviations.

\[ \begin{aligned} \text{mean of x} \longrightarrow & \bar{x} = \frac{1}{n} \left( x_1 + x_2 + \cdots + x_n \right) \\ \text{mean of y} \longrightarrow & \bar{y} = \frac{1}{n} \left( y_1 + y_2 + \cdots + y_n \right) \\ \text{standard deviation of x} \longrightarrow & s_x = \sqrt{\frac{1}{n-1} \left( \left(x_1 - \bar{x}\right)^2 + \left(x_2 - \bar{x}\right)^2 + \cdots + \left(x_n - \bar{x}\right)^2 \right)} \\ \text{standard deviation of y} \longrightarrow & s_y = \sqrt{\frac{1}{n-1} \left( \left(y_1 - \bar{y}\right)^2 + \left(y_2 - \bar{y}\right)^2 + \cdots + \left(y_n - \bar{y}\right)^2 \right)} \\ \text{correlation of x and y} \longrightarrow & r = \frac{\left( \left(x_1 - \bar{x} \right)\left(y_1 - \bar{y} \right) + \cdots + \left(x_n - \bar{x} \right)\left(y_n - \bar{y}\right) \right)}{\sqrt{\left( \left(x_1 - \bar{x}\right)^2 + \cdots + \left(x_n - \bar{x}\right)^2 \right) \left( \left(y_1 - \bar{y}\right)^2 + \cdots + \left(y_n - \bar{y}\right)^2 \right) }} \\ \text{best fit slope} \longrightarrow & b_1 = \frac{s_y}{s_x} r \\ \text{best fit intercept} \longrightarrow & b_0 = \bar{y} - b_1 \bar{x} \end{aligned} \]

We typically use software, such as R, to compute the above values.
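For example, in R the entire computation above is a single call to `lm()`; this minimal sketch uses the same hypothetical simulated data as before:

```r
set.seed(42)                          # hypothetical simulated data
x <- runif(100, min = 0, max = 10)
y <- 2 + 3 * x + rnorm(100, sd = 2)

coef(lm(y ~ x))   # b_0 and b_1; matches the hand computation above
```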

Interpreting the Slope Estimate

\[ b_1 = \frac{s_y}{s_x} r \]

Measuring the Strength of the Linear Fit

The coefficient of determination can then be calculated as

\[ R^2 = \frac{SST - SSE}{SST} = 1 - \frac{SSE}{SST} \]

where

\[ SSE = (\epsilon_1)^2 + (\epsilon_2)^2 + \cdots + (\epsilon_n)^2 \hspace{10px} \text{ and } \hspace{10px} SST = (y_1 - \bar{y})^2 + (y_2 - \bar{y})^2 + \cdots + (y_n - \bar{y})^2. \]

\(R^2\) ranges from 0 to 1 and measures how well the linear regression fits the data.

Interpretation: \(R^2\) is the proportion of the variability in the \(y\) values that is explained by the linear model. Values close to 1 indicate a strong linear fit, while values close to 0 indicate that the model explains little of the variability.

For a linear model with one predictor and one outcome, the relationship between the correlation and the coefficient of determination is \(R^2 = r^2\).
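Both identities, \(R^2 = 1 - SSE/SST\) and \(R^2 = r^2\), can be checked with a minimal R sketch (same hypothetical simulated data as above):

```r
set.seed(42)                               # hypothetical simulated data
x <- runif(100, min = 0, max = 10)
y <- 2 + 3 * x + rnorm(100, sd = 2)
fit <- lm(y ~ x)

SSE <- sum(resid(fit)^2)                   # left-over variability
SST <- sum((y - mean(y))^2)                # total variability
c(from_definition = 1 - SSE / SST,         # R^2 = 1 - SSE/SST
  squared_cor     = cor(x, y)^2,           # r^2
  from_lm         = summary(fit)$r.squared)  # R's built-in R^2
```

All three values agree for a simple linear regression with one predictor.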

Outliers in Linear Regression

Three plots, each with a least squares line and corresponding residual plot. Each dataset has at least one outlier.


Outliers in Linear Regression

Types of outliers.

We must be cautious about removing outliers from our models. Outliers are sometimes interesting cases worth investigating in their own right, and understanding them can even lead to a much better model.

Try out this [least-squares regression interactive demo](https://phet.colorado.edu/sims/html/least-squares-regression/latest/least-squares-regression_en.html) to play around with outliers in least-squares regression.

Activity: Assessing Residuals of a Linear Model

  1. Make sure you have a copy of the F 4/11 Worksheet. It will be handed out physically and will be available on Moodle after class.
  2. Work on your worksheet by yourself for 10 minutes. Please read the instructions carefully, and ask questions if anything needs clarification.
  3. Get together with another student.
  4. Discuss your results.
  5. Submit your worksheet on Moodle as a .pdf file.
