Linear regression and the least squares problem

Optimizing linear regression by hand

Tivadar Danka
Dec 12, 2023


This post is the next chapter of my upcoming Mathematics of Machine Learning book, available in early access.

New chapters are available to premium subscribers of The Palindrome as well, but 40+ chapters (~450 pages) are available exclusively to early access members.

The Palindrome aims to democratize high-quality education; for that, I need your support. Upgrade to a paid subscription to read this premium post!


In the previous post, we took our first step in machine learning and trained our very first linear regressor.

(Note: this post is a direct continuation of the previous one, so be sure to check it out if you get lost in the details.)

This time, we are continuing on the same path; there’s much to learn about linear regression.

Using gradient descent for linear regression is like shooting a sparrow with a cannonball, especially for a single-variable model. Why? Because the loss function is so simple that we can easily find an analytic solution.
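To make this concrete, here is a minimal sketch of the closed-form solution for a single-variable model. (The derivation below is the standard one for least squares; the synthetic data and variable names are mine, not taken from the book's text.) Setting the gradient of the mean squared error to zero yields a = cov(x, y) / var(x) and b = mean(y) − a · mean(x).

```python
import numpy as np

# Illustrative synthetic data for a single-variable model y ≈ a*x + b.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=100)
y = 2.5 * x + 1.0 + rng.normal(scale=0.5, size=100)  # true slope 2.5, intercept 1.0

# Closed-form least squares solution: setting the gradient of the
# mean squared error to zero gives
#   a = cov(x, y) / var(x),   b = mean(y) - a * mean(x).
a = np.cov(x, y, bias=True)[0, 1] / np.var(x)
b = y.mean() - a * x.mean()

print(f"slope: {a:.3f}, intercept: {b:.3f}")  # recovers ≈ 2.5 and ≈ 1.0
```

Note that no iteration is involved: unlike gradient descent, which inches toward the minimum step by step, this computes the optimum in one shot.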
