Gradient Descent on Linear Regression

Gradient descent is one of the most important optimisation techniques. It is used in a wide variety of machine learning methods due to its flexibility: it can be applied to any differentiable objective function. With each iteration, steps are taken in the direction of the negative gradient until converging to a local minimum. As the algorithm approaches the local minimum the steps become smaller, until a specified tolerance is met or the maximum number of iterations is reached. To understand how this works, gradient descent is applied to a common method: simple linear regression.

The simple linear regression model is of the form:

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$$

where

$$\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}, \quad \mathbf{X} = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix}, \quad \boldsymbol{\beta} = \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix}, \quad \boldsymbol{\varepsilon} = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix}$$

The objective is to find the parameters \(\boldsymbol{\beta}\) such that they minimise the mean squared error:

$$L(\boldsymbol{\beta}) = \frac{1}{n}(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})$$
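As a concrete reference point, the mean squared error can be written as a short NumPy function (a sketch; the function and variable names are illustrative):

```python
import numpy as np

def mse(X, y, beta):
    """Mean squared error of the linear model y ≈ X @ beta."""
    residuals = y - X @ beta
    return (residuals @ residuals) / len(y)
```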

This is a good problem to start with since we know the analytical solution is given by

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y}$$
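This closed-form solution can be computed directly with NumPy; here is a minimal sketch on synthetic data (the data-generating values and names are illustrative):

```python
import numpy as np

# Synthetic data: y = 2 + 3x + noise (illustrative values)
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=100)
y = 2.0 + 3.0 * x + rng.normal(0, 1, size=100)

# Design matrix with a column of ones for the intercept
X = np.column_stack([np.ones_like(x), x])

# Analytical solution: solve (X^T X) beta = X^T y rather than
# forming the inverse explicitly, which is better conditioned
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # should be close to [2, 3]
```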

Gradient descent

The objective is to achieve the same result using gradient descent. Gradient descent works by updating the parameters with each iteration in the direction of the negative gradient, i.e.

$$\boldsymbol{\beta}_{t+1} = \boldsymbol{\beta}_t - \gamma \nabla L(\boldsymbol{\beta}_t)$$

where \(\gamma\) is the learning rate and

$$\nabla L(\boldsymbol{\beta}) = -\frac{2}{n}\mathbf{X}^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})$$
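A minimal sketch of this update rule in NumPy might look like the following (the function names, default learning rate, and stopping tolerance are illustrative choices, not the exact values used below):

```python
import numpy as np

def gradient(X, y, beta):
    """Gradient of the mean squared error with respect to beta."""
    n = len(y)
    return -2.0 / n * X.T @ (y - X @ beta)

def gradient_descent(X, y, gamma=0.01, iterations=1000, tol=1e-8):
    """Repeatedly step in the direction of the negative gradient."""
    beta = np.zeros(X.shape[1])
    for _ in range(iterations):
        step = gamma * gradient(X, y, beta)
        beta -= step
        if np.linalg.norm(step) < tol:  # stop once updates become tiny
            break
    return beta
```

For a well-scaled problem this converges to the same estimates as the analytical solution, which is a useful sanity check.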

The learning rate ensures we don't jump too far with each iteration, instead taking some proportion of the gradient; otherwise we could end up overshooting the local minimum, taking much longer to converge or not finding the optimal solution at all. Applying this to the problem above, we'll initialise our values for \(\boldsymbol{\beta}\) to something sensible, e.g. \(\boldsymbol{\beta} = \mathbf{0}\). I have chosen a learning rate \(\gamma\) with 1000 iterations, which is a reasonable starting point for this problem. It's worth trying different values of \(\gamma\) to see how it changes convergence. The algorithm is set up as