This post is the next chapter of my upcoming Mathematics of Machine Learning book, available in early access.
New chapters are also available to premium subscribers of The Palindrome, but 40+ chapters (~450 pages) are available exclusively to members of the early access.
In the previous post, we took our first step in machine learning and trained our very first linear regressor.
(Note: this post is a direct continuation of the previous one, so be sure to check it out if you are getting lost in the details.)
This time, we are continuing on the same path; there’s much to learn about linear regression.
Using gradient descent for linear regression is like shooting a sparrow with a cannonball, especially for a single-variable model. Why? Because the loss function is so simple that we can easily find an analytic solution.
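To make the claim concrete, here is a minimal sketch of that analytic solution for a single-variable model y ≈ a·x + b, obtained by setting the derivatives of the squared loss to zero. (The function name `fit_linear` and the sample data are illustrative, not from the book.)

```python
import numpy as np

def fit_linear(x, y):
    """Closed-form least-squares fit for y ≈ a*x + b."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Setting the loss gradient to zero yields:
    #   a = Cov(x, y) / Var(x),   b = mean(y) - a * mean(x)
    a = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    b = y.mean() - a * x.mean()
    return a, b

# Data generated exactly by y = 2x + 1, so the fit recovers a=2, b=1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
a, b = fit_linear(x, y)
# a ≈ 2.0, b ≈ 1.0
```

No iterations, no learning rate: one covariance, one variance, and we are done. This is why gradient descent is overkill here.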