Optimization
Given the weights matrix W, we want to find the W that minimizes the loss function and optimizes the success metric (or metrics).
A first, naive strategy is random search: generate N random weight matrices and keep the one that performs best on the training set.
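A minimal sketch of random search, assuming a placeholder scalar loss `loss_fn(W, X, y)` and training data `X`, `y` (none of these are defined in these notes; they stand in for whatever model and data is being used):

```python
import numpy as np

def random_search(loss_fn, X, y, shape, num_trials=1000):
    """Try num_trials random weight matrices and keep the best one.

    loss_fn(W, X, y) is assumed to return a scalar training loss;
    it is a placeholder, not a function defined in these notes.
    """
    best_loss = float("inf")
    best_W = None
    for _ in range(num_trials):
        W = np.random.randn(*shape) * 0.001  # small random weights
        loss = loss_fn(W, X, y)
        if loss < best_loss:
            best_loss = loss
            best_W = W
    return best_W, best_loss
```

This is only a baseline: it ignores all structure of the loss and scales very poorly with the number of parameters.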
In multiple dimensions, the gradient is the vector of partial derivatives along each dimension. The slope in any direction is the dot product of the direction with the gradient. The direction of the steepest descent is the negative gradient.
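As a small illustrative sketch (the function `f`, the evaluation point, and the direction below are made up for this example), the gradient can be approximated with centered finite differences, and the slope along any direction is its dot product with the gradient:

```python
import numpy as np

def numerical_gradient(f, x, h=1e-5):
    """Approximate the gradient of f at x with centered finite differences."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2 * h)
    return grad

# Example: f(x, y) = x^2 + 3y evaluated at (1, 2) -- values chosen purely for illustration.
f = lambda v: v[0] ** 2 + 3 * v[1]
x = np.array([1.0, 2.0])
grad = numerical_gradient(f, x)     # approximately [2., 3.]
direction = np.array([0.6, 0.8])    # a unit direction
slope = direction.dot(grad)         # slope of f along this direction
```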
In practice, we use the analytic gradient, derived with calculus, rather than a numerical approximation: it is exact and much faster to evaluate.
The step size controls how far we update the weights in the direction opposite to the gradient. Since the gradient points in the direction of the greatest increase of the function, stepping along the negative gradient slowly moves us in the direction of the greatest decrease. The step size is a hyperparameter of the algorithm, also called the learning rate.
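A minimal sketch of the resulting update loop, assuming a hypothetical `loss_and_gradient(W, X, y)` that returns the loss and its analytic gradient with respect to W:

```python
import numpy as np

def gradient_descent(loss_and_gradient, W, X, y, step_size=1e-3, num_steps=100):
    """Vanilla gradient descent: repeatedly step against the gradient.

    loss_and_gradient(W, X, y) is assumed to return (loss, dW); it is a
    placeholder for whatever analytic gradient the model provides.
    """
    for _ in range(num_steps):
        loss, dW = loss_and_gradient(W, X, y)
        W = W - step_size * dW  # move opposite to the gradient (steepest descent)
    return W
```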
In stochastic gradient descent, rather than computing the loss as an average over all the training examples, we select M random training examples from the dataset and compute the loss (and gradient) on them. We call this subset a mini-batch.
Once our network has taken a step on all the data points in our mini-batch, we select a new subset of random points and train our model with that. We continue this process until we’ve exhausted all training points, at which point we’ve completed an epoch. We then start a new epoch and continue until convergence.
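A rough sketch of this mini-batch loop, again assuming a placeholder `loss_and_gradient(W, X_batch, y_batch)`; the batch size and learning rate shown are arbitrary choices for illustration:

```python
import numpy as np

def sgd(loss_and_gradient, W, X, y, batch_size=64, learning_rate=1e-3, num_epochs=10):
    """Mini-batch stochastic gradient descent over several epochs."""
    num_examples = X.shape[0]
    for epoch in range(num_epochs):
        # Shuffle once per epoch so each mini-batch is a fresh random subset.
        perm = np.random.permutation(num_examples)
        for start in range(0, num_examples, batch_size):
            idx = perm[start:start + batch_size]
            X_batch, y_batch = X[idx], y[idx]
            loss, dW = loss_and_gradient(W, X_batch, y_batch)
            W = W - learning_rate * dW  # step against the mini-batch gradient
    return W
```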
TBA
TBA