Optimization
Given the weights matrix W, we want to find the values that minimize the loss function and optimize our success metric (or metrics).
The simplest strategy is random search: generate N random weight matrices and keep the one that performs best on the training set (see the sketch below).
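A minimal sketch of that baseline, assuming a `loss_fn(W, X_train, y_train)` helper that returns the training loss (the helper name and data arrays are placeholders, not part of these notes):

```python
import numpy as np

def random_search(loss_fn, X_train, y_train, shape, n_tries=1000):
    """Try n_tries random weight matrices and keep the one with the lowest training loss."""
    best_W, best_loss = None, float("inf")
    for _ in range(n_tries):
        W = np.random.randn(*shape) * 0.001   # small random candidate weights
        loss = loss_fn(W, X_train, y_train)
        if loss < best_loss:
            best_W, best_loss = W, loss
    return best_W, best_loss
```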
In multiple dimensions, the gradient is the vector of partial derivatives along each dimension. The slope in any direction is the dot product of the direction with the gradient. The direction of the steepest descent is the negative gradient.
In practice, we compute and follow the analytic gradient, i.e. the gradient derived with calculus rather than approximated numerically.
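As a small illustration (not from the notes) of what "analytic gradient" means: for a single squared-error term f(w) = (w·x − y)², calculus gives the exact gradient 2(w·x − y)·x, which we can sanity-check against a finite-difference estimate:

```python
import numpy as np

def f(w, x, y):
    return (w @ x - y) ** 2

def analytic_grad(w, x, y):
    # d/dw (w.x - y)^2 = 2 (w.x - y) x
    return 2 * (w @ x - y) * x

def numeric_grad(w, x, y, h=1e-5):
    # central finite differences, one coordinate at a time
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = h
        g[i] = (f(w + e, x, y) - f(w - e, x, y)) / (2 * h)
    return g

w, x, y = np.random.randn(3), np.random.randn(3), 0.5
print(np.allclose(analytic_grad(w, x, y), numeric_grad(w, x, y)))  # True (up to numerical error)
```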
The step size controls how far we update the weights in the direction opposite to the gradient. Since the gradient points in the direction of the greatest increase of the function, step_size lets us move gradually in the direction of the greatest decrease. This variable is a hyperparameter of the algorithm, also called the learning rate.
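Written out, the update this paragraph describes is the standard gradient-descent rule:

$$W \leftarrow W - \alpha \,\nabla_W L(W)$$

where α is the learning rate (step_size) and ∇_W L(W) is the gradient of the loss with respect to W.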
We want the slope to get close to 0, since a (local) minimum of the loss is reached where the gradient vanishes. To compute the partial derivative of the loss w.r.t. W, we can apply the chain rule:
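A minimal sketch of that chain-rule expansion, assuming the loss L depends on W through an intermediate score vector s = Wx (this intermediate variable is an assumption for illustration; the notes above do not define it):

$$\frac{\partial L}{\partial W} = \frac{\partial L}{\partial s}\,\frac{\partial s}{\partial W}, \qquad \text{with } s = W x$$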
To go into more detail about how gradient descent (GD) works, consider the following sketch of the update loop:
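A minimal full-batch version in Python, assuming a `loss_and_grad(W, X, y)` helper that returns the loss and its analytic gradient w.r.t. W (the helper name is a placeholder):

```python
def gradient_descent(loss_and_grad, W, X, y, step_size=1e-3, n_steps=1000):
    """Vanilla (full-batch) gradient descent: every step uses the whole training set."""
    for _ in range(n_steps):
        loss, grad = loss_and_grad(W, X, y)   # loss and its gradient w.r.t. W
        W = W - step_size * grad              # step against the gradient
    return W, loss
```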
In stochastic gradient descent, rather than calculating the error as an average over all the training examples, we select M random training examples from the entire dataset and use those in our cost function. We call this subset a mini-batch.
Once our network has been trained on all the data points in our mini-batch, we select a new subset of random points and train our model with that. We continue this process until we’ve exhausted all training points, at which point we’ve completed an epoch. We then start with a new epoch and continue until convergence.
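A sketch of that loop, again assuming the hypothetical `loss_and_grad` helper; shuffling the indices once per epoch and slicing them is one common way to "exhaust" the training set:

```python
import numpy as np

def minibatch_gd(loss_and_grad, W, X, y, batch_size=64, step_size=1e-3, n_epochs=10):
    n = X.shape[0]
    for _ in range(n_epochs):                      # one pass over the data = one epoch
        idx = np.random.permutation(n)             # visit the samples in random order
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]  # M random training examples
            _, grad = loss_and_grad(W, X[batch], y[batch])
            W = W - step_size * grad               # update after every mini-batch
    return W
```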
Main differences w.r.t. the "standard" Gradient Descent:
Partial derivatives are computed on every training sample
Meaning, the model weights and bias are updated after each sample (see the sketch after this list)
The step size (alpha) is smaller to avoid big changes between samples
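Putting those points together, the per-sample variant is essentially the same loop with a batch size of one; a sketch with the same assumed `loss_and_grad` helper and a smaller step size:

```python
import numpy as np

def sgd_per_sample(loss_and_grad, W, X, y, step_size=1e-4, n_epochs=10):
    for _ in range(n_epochs):
        for i in np.random.permutation(X.shape[0]):
            _, grad = loss_and_grad(W, X[i:i + 1], y[i:i + 1])  # one sample at a time
            W = W - step_size * grad                            # update after each sample
    return W
```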
There's a variant called Mini-batch GD (a usage sketch follows the list):
Mini-batches of a fixed size are used
Parameters are updated after each mini-batch
This solution is less noisy than SGD
But converges faster than Vanilla GD
Better GPU usage
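As a usage sketch (reusing the hypothetical `minibatch_gd` and `loss_and_grad` helpers from above; the data and initial weights here are placeholders), the batch size is the knob that trades gradient noise for update frequency:

```python
import numpy as np

X, y = np.random.randn(1000, 10), np.random.randn(1000)   # placeholder data
W0 = np.zeros(10)                                          # placeholder initial weights

# A smaller batch gives noisier but more frequent updates; a larger batch gives
# smoother gradient estimates and better GPU utilization, with fewer updates per epoch.
W_small = minibatch_gd(loss_and_grad, W0, X, y, batch_size=32)
W_large = minibatch_gd(loss_and_grad, W0, X, y, batch_size=256)
```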