Cross-validation
Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that simply repeated the labels of the samples it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data. This situation is called overfitting. To avoid it, it is common practice when performing a (supervised) machine learning experiment to hold out part of the available data as a test set X_test, y_test. Here is a flowchart of a typical cross-validation workflow in model training. The best parameters can be determined by grid search techniques.
If we use a single random train-test split into 70%-30% or 80%-20% and the available data is limited, there is a possibility of high bias from that particular split. If the dataset is large and the train and test samples have the same distribution, then this approach is acceptable.
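As a minimal sketch of such a holdout split, assuming scikit-learn and a toy dataset (the 70%-30% ratio, the estimator, and random_state are illustrative choices, not prescribed by the text):

```python
# Minimal sketch of a single random holdout split (assumes scikit-learn is installed).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Hold out 30% of the data as the test set X_test, y_test.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = SVC(kernel="linear", C=1).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```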
If that's not the case, when evaluating different hyperparameters, there is still a risk of overfitting on the test set because the parameters can be tweaked until the estimator performs optimally. This way, knowledge about the test set can be “leaked” into the model, and evaluation metrics no longer report on generalization performance.
To solve this problem, yet another part of the dataset can be held out as a so-called “validation set”: training proceeds on the training set, after which evaluation is done on the validation set, and when the experiment seems to be successful, final evaluation can be done on the test set.
However, by partitioning the available data into three sets, we drastically reduce the number of samples which can be used for learning the model, and the results can depend on a particular random choice for the pair of (train, validation) sets.
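One way to realize such a three-way partition is to apply two successive random splits. The sketch below uses hypothetical 60%/20%/20% proportions and a toy dataset, with scikit-learn assumed:

```python
# Sketch of a train/validation/test partition via two successive random splits.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# First split off the final test set (20% of the data).
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Then split the remainder into training and validation sets (25% of the rest,
# i.e. roughly 20% of the full data).
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0
)

print(len(X_train), len(X_val), len(X_test))  # e.g. 90, 30, 30 for the 150-sample iris set
```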
Cross-validation (CV) solves this problem: a test set should still be held out for final evaluation, but the validation set is no longer needed. In the basic approach, called k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”:
a model is trained using k−1 of the folds as training data
the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy)
the performance measure reported by k-fold cross-validation is then the average of the values computed in the loop.
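A short sketch of this procedure, assuming scikit-learn and an illustrative estimator; the explicit loop mirrors the steps above, and cross_val_score wraps the same logic in one call:

```python
# K-fold CV: train on k-1 folds, evaluate on the held-out fold, average the scores.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf = SVC(kernel="linear", C=1)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train_idx, val_idx in kf.split(X):
    clf.fit(X[train_idx], y[train_idx])               # train on k-1 folds
    scores.append(clf.score(X[val_idx], y[val_idx]))  # validate on the remaining fold
print("manual k-fold mean accuracy:", np.mean(scores))

# Equivalent one-liner over the same splits:
print("cross_val_score mean accuracy:", cross_val_score(clf, X, y, cv=kf).mean())
```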
Example of parameter estimation using grid search with CV
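A hedged sketch of what such parameter estimation could look like with scikit-learn; the estimator and parameter grid below are illustrative assumptions, not values given by the text:

```python
# Grid search over hyperparameters, with k-fold CV used to score each candidate.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}  # illustrative grid
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print("CV score of best model:", search.best_score_)
print("final evaluation on held-out test set:", search.score(X_test, y_test))
```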
Similar to K-fold cross-validation, Stratified K-fold returns stratified folds: each fold preserves the percentage of samples of each class found in the full dataset, so the model sees the same class distribution in the training and validation folds.
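A small sketch (scikit-learn and a toy dataset assumed) that prints the class counts of each validation fold to illustrate the stratification:

```python
# Stratified k-fold: each fold preserves the class proportions of the full dataset.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold

X, y = load_iris(return_X_y=True)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
    # np.bincount shows that each validation fold keeps the original class balance.
    print(f"fold {fold}: class counts in validation fold =", np.bincount(y[val_idx]))
```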
Repeated K-Fold repeats K-Fold n times, producing different splits in each repetition. It can be used when a single K-fold run is not enough and a more stable performance estimate is desired.
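A minimal sketch of repeated K-fold with scikit-learn (5 folds repeated 3 times is an arbitrary illustrative choice):

```python
# Repeated k-fold: the k-fold split is re-drawn n_repeats times with different shuffles.
from sklearn.datasets import load_iris
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rkf = RepeatedKFold(n_splits=5, n_repeats=3, random_state=0)  # 5 folds, repeated 3 times

scores = cross_val_score(SVC(kernel="linear", C=1), X, y, cv=rkf)
print(len(scores), "scores in total (5 folds x 3 repetitions); mean =", scores.mean())
```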
In LOOCV each learning set is created by taking all the samples except one, the test set being the single sample left out. Thus, for n samples, we have n different training sets and n different test sets.
This cross-validation procedure does not waste much data as only one sample is removed from the training set. Thus, it's appropriate when you have a small dataset or when an accurate estimate of model performance is more important than the computational cost of the method.
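A short sketch of LOOCV with scikit-learn; the dataset and estimator are illustrative, and note that the number of model fits equals the number of samples:

```python
# Leave-One-Out CV: each sample is used exactly once as a single-element test set.
from sklearn.datasets import load_iris
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)          # 150 samples -> 150 fits, so keep the model cheap
loo = LeaveOneOut()

scores = cross_val_score(SVC(kernel="linear", C=1), X, y, cv=loo)
print("number of fits:", len(scores))      # equals the number of samples
print("LOOCV accuracy:", scores.mean())
```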
Don't use LOOCV: large datasets, or models that are costly to fit.
Use LOOCV: small datasets, or when an accurate estimate of model performance is critical.
Related: Leave P Out (LPOCV)
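For reference, a tiny sketch of Leave-P-Out with scikit-learn, using a toy 4-sample array since the number of splits grows combinatorially with the dataset size:

```python
# Leave-P-Out: every possible subset of p samples is used once as the test set.
import numpy as np
from sklearn.model_selection import LeavePOut

X = np.arange(8).reshape(4, 2)   # tiny toy data: 4 samples
lpo = LeavePOut(p=2)             # C(4, 2) = 6 train/test splits

for train_idx, test_idx in lpo.split(X):
    print("train:", train_idx, "test:", test_idx)
```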