Gradient Descent

Before we discuss how we can find the optimal weights and the optimal bias in a linear regression setting, let us take a step back and consider how we can find the value of a variable $x$ that minimizes a function $f(x)$.

The equation $f(x) = x^2$ depicts a parabola. From visual inspection we can determine that $f(x)$ is lowest when $x$ is exactly 0.

[Plot: the parabola $f(x) = x^2$, with $x$ on the horizontal axis and $f(x)$ on the vertical axis.]

In machine learning we rarely have the luxury of being able to visually find the optimal solution. Our function usually depends on thousands or millions of features, and that is not something we can visualize. We need to apply an algorithmic procedure that finds the minimum automatically. We start the algorithm by assigning $x$ a random value. In the example below we picked 55.

[Plot: $f(x) = x^2$ with the starting point $x = 55$ highlighted.]

Next we calculate the derivative of $f(x)$ with respect to $x$. Using the rules of basic calculus we derive $\frac{df}{dx} = 2x$. The slope at our starting point is therefore $2 \cdot 55 = 110$. We can draw the tangent line at the starting point to visualize the derivative.

[Plot: $f(x) = x^2$ with the tangent line at the starting point $x = 55$.]
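To make these numbers concrete, here is a small Python check (a sketch assuming $f(x) = x^2$ as in this example): the analytic derivative $2x$ evaluated at $x = 55$ gives the slope 110, and a finite-difference approximation agrees.

```python
# Verify the slope at the starting point x = 55 for f(x) = x^2.

def f(x):
    return x ** 2

x0 = 55.0
analytic = 2 * x0  # df/dx = 2x, so the slope at x = 55 is 110

# Central finite-difference approximation of the derivative
h = 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)

print(analytic, numeric)  # both are (approximately) 110.0
```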

The derivative shows us the direction of steepest ascent; put differently, it tells us in which direction we have to change $x$ if we want to reduce $f(x)$: against the derivative. The gradient descent algorithm utilizes that direction and simply subtracts the derivative $\frac{df}{dx}$ from $x$. Gradient descent is an iterative algorithm: we keep calculating the derivative $\frac{df}{dx}$ and updating the variable $x$ until some criterion is met. For example, once the change in $x$ falls below a certain threshold, we can assume that we are very close to the minimum.

While the derivative gives us the direction in which we should take a step, it does not give us the size of the step. For that purpose we use a variable $\alpha$, also called the learning rate. The learning rate scales the derivative by multiplying it with a value that usually lies between 0.1 and 0.001. Too large a learning rate can make the algorithm diverge, meaning $x$ would get larger and larger and never get close to the minimum, while too small a learning rate would slow down the training process dramatically.

Info

At each time step $t$ of the gradient descent algorithm we update the variable $x_t$, until $x_t$ converges to the minimum.

$$x_{t+1} := x_t - \alpha \frac{df(x_t)}{dx}$$
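As a minimal sketch (not the code behind the interactive demo below), the update rule can be written in a few lines of Python, using $f(x) = x^2$, a starting value of 55 and a learning rate of 0.01 as in the example:

```python
# Gradient descent on f(x) = x^2, whose derivative is df/dx = 2x.

def df_dx(x):
    return 2 * x

x = 55.0          # starting value, as in the example above
alpha = 0.01      # learning rate
threshold = 1e-6  # stop once the change in x becomes negligible

while True:
    step = alpha * df_dx(x)  # the scaled derivative
    x = x - step             # x_{t+1} = x_t - alpha * df/dx
    if abs(step) < threshold:
        break

print(x)  # very close to the minimum at x = 0
```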

Below you can play with an interactive example to get some intuition regarding the gradient descent algorithm. Each click on the play button takes a single gradient descent step, based on the parameters that you can change with the sliders.

[Interactive plot: $f(x) = x^2$ with the current derivative displayed (110.00) and sliders for the learning rate $\alpha$ (initially 0.01) and the starting value $x$ (initially 55.00).]

You can learn several things about gradient descent if you play with the example.

  1. If you try positive and negative $x$ values you will observe that the sign of the derivative changes based on the sign of $x$. That behaviour makes sure that we subtract negative values from $x$ when $x$ is negative and subtract positive values from $x$ when $x$ is positive. No matter where we start, the algorithm always pushes the variable towards the minimum.
  2. If you try gradient descent with an $\alpha$ of 1.01 you will observe that the algorithm starts to diverge. Picking the correct learning rate is an extremely useful skill and is generally one of the first things to tweak when you want your algorithm to perform better. In fact $\alpha$ is one of the so-called hyperparameters. A hyperparameter is a parameter that is set by the programmer and that influences the learning of the parameters you are truly interested in (like the weight $w$ and the bias $b$).
  3. You should also notice that the magnitude of the derivative decreases as we get closer and closer to the optimal value, while the slope of the tangent gets flatter and flatter. This natural behaviour makes sure that we take smaller and smaller steps as we approach the optimum. It also means that gradient descent does not find the exact optimal value for $x$, but an approximate one. In many cases it is sufficient to be close enough to the optimal value. The short sketch below illustrates these three observations in code.
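The following sketch (a plain Python illustration, not the code behind the interactive example) reproduces the three observations for $f(x) = x^2$:

```python
def gradient(x):
    return 2 * x  # derivative of f(x) = x^2

def run(x, lr, steps=25):
    for _ in range(steps):
        x = x - lr * gradient(x)
    return x

# 1. The sign of the derivative pushes x towards 0 from either side.
print(run(55.0, lr=0.1), run(-55.0, lr=0.1))

# 2. A learning rate of 1.01 makes the algorithm diverge:
#    x_{t+1} = x_t - 1.01 * 2 * x_t = -1.02 * x_t, so |x| grows every step.
print(run(55.0, lr=1.01))

# 3. The steps shrink automatically near the optimum,
#    because the derivative itself shrinks as x approaches 0.
x = 55.0
for step in range(5):
    update = 0.1 * gradient(x)
    print(f"x = {x:7.2f}, update = {update:6.2f}")
    x -= update
```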

While the gradient descent algorithm is the de facto standard in deep learning, it has some limitations. Only when we are dealing with a convex function do we have a guarantee that the algorithm will converge to the global optimum. A convex function is, like the parabola above, a function that is shaped like a "bowl". Such a "bowl"-shaped function allows the variable to move towards the minimum without any barriers.

Below is the graph of a non-convex function. We start at an $x$ value of 6. If you apply gradient descent several times (arrow button) you will notice that the ball gets stuck in a local minimum and thus never keeps moving in the direction of the global minimum. This is due to the fact that at the local minimum the derivative is 0, so the gradient descent update stops making progress. You can move the slider below the graph to place the ball to the left of 0 and observe that the ball keeps going further and further down.

[Interactive plot: a non-convex function with the ball starting at $x = 6.00$ and a slider to change the starting position.]
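The same effect can be reproduced in a short sketch. The function used here, $f(x) = x^4 - 8x^2 + 3x$, is a stand-in chosen purely for illustration (the interactive example above uses a different function); it has a local minimum near $x \approx 1.9$ and a lower, global minimum near $x \approx -2.1$.

```python
# Gradient descent on the stand-in non-convex function f(x) = x^4 - 8x^2 + 3x.

def df_dx(x):
    return 4 * x ** 3 - 16 * x + 3

def descend(x, lr=0.001, steps=5000):
    for _ in range(steps):
        x = x - lr * df_dx(x)
    return x

print(descend(6.0))   # gets stuck near the local minimum at x ≈ 1.9
print(descend(-0.5))  # started left of the barrier, it reaches x ≈ -2.1
```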

This behaviour has several implications that we should discuss. First, the starting position of the variable matters and might have an impact on the performance. Second, the following question arises: "Why do deep learning researchers and practitioners use gradient descent if the neural network function is not convex and there is a chance that the algorithm will get stuck in a local minimum?" Simply put, because it works exceptionally well in practice. Additionally, we rarely use the "traditional" gradient descent algorithm in practice. Over time, researchers discovered that the algorithm can be improved by ideas such as "momentum", which keeps part of the speed of previous gradient descent steps and might thus jump over a local minimum. We will cover those ideas later; for now let's focus on the basic algorithm.
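As a rough sketch of the momentum idea (one common formulation; we will look at it properly later), the update keeps a velocity that accumulates past gradients:

```python
# Gradient descent with momentum: the velocity keeps part of the previous
# speed (scaled by beta) and adds the current, learning-rate-scaled gradient.

def momentum_descent(grad, x, lr=0.01, beta=0.9, steps=1000):
    velocity = 0.0
    for _ in range(steps):
        velocity = beta * velocity - lr * grad(x)
        x = x + velocity
    return x

print(momentum_descent(lambda x: 2 * x, 55.0))  # close to 0 for f(x) = x^2
```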

Before we move on to the part where we discuss how we can apply this algorithm to linear regression, let us discuss how we can deal with functions that have more than one variable, for example $f(x_1, x_2)$. The approach is actually very similar. Instead of calculating the derivative with respect to a single variable $x$, we need to calculate the partial derivatives with respect to all variables, in our case $x_1$ and $x_2$. For convenience we put the partial derivatives and the variables into their corresponding vectors.

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad \nabla f(\mathbf{x}) = \begin{bmatrix} \dfrac{\partial f}{\partial x_1} \\[2.2ex] \dfrac{\partial f}{\partial x_2} \end{bmatrix}$$

The gradient descent algorithm looks almost the same. The only difference is that the scalars are replaced by vectors.

$$\mathbf{x}_{t+1} := \mathbf{x}_t - \alpha \nabla f(\mathbf{x}_t)$$

The vector that is represented by $\nabla f(\mathbf{x})$ (the symbol $\nabla$ is pronounced "nabla") is called the gradient, giving its name to the gradient descent algorithm.
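As a final sketch, here is the vectorized update in Python with NumPy, using a hypothetical two-variable function $f(x_1, x_2) = x_1^2 + 3x_2^2$ (chosen just for illustration), whose gradient is $\nabla f = (2x_1, 6x_2)$:

```python
import numpy as np

# Gradient of the illustrative function f(x1, x2) = x1**2 + 3 * x2**2
def grad_f(x):
    return np.array([2 * x[0], 6 * x[1]])

x = np.array([55.0, -20.0])  # arbitrary starting point
alpha = 0.01                 # learning rate

for _ in range(2000):
    x = x - alpha * grad_f(x)  # x_{t+1} = x_t - alpha * grad f(x_t)

print(x)  # both components end up very close to the minimum at (0, 0)
```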