In machine learning, early stopping is a form of regularization used when a model (such as a neural network) is trained by on-line gradient descent. In early stopping, the available training data are split into a new training set and a validation set. Gradient descent is applied to the new training set, and after each sweep (epoch) through it, the network is evaluated on the validation set. When performance on the validation set stops improving, the algorithm halts. The network with the best validation performance is then used for actual testing, with a separate set of data (the validation set itself is consumed during learning, to decide when to stop).
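The procedure can be summarized in a short sketch. The following Python snippet is a minimal illustration of this loop, not a reference implementation; `train_step` and `validation_loss` are hypothetical callables standing in for one sweep of gradient descent and the validation evaluation, and the `patience` parameter (a common practical refinement) tolerates a few non-improving sweeps before halting:

```python
# Minimal early-stopping sketch. `model`, `train_step`, and `validation_loss`
# are hypothetical stand-ins for the reader's own training code.
import copy

def train_with_early_stopping(model, train_step, validation_loss,
                              max_epochs=1000, patience=10):
    """Train until validation loss stops improving for `patience` epochs."""
    best_loss = float("inf")
    best_model = copy.deepcopy(model)   # snapshot of the best weights so far
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        train_step(model)               # one sweep (epoch) over the new training set
        val_loss = validation_loss(model)

        if val_loss < best_loss:        # validation performance improved
            best_loss = val_loss
            best_model = copy.deepcopy(model)
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                   # halt: no improvement for `patience` sweeps

    return best_model                   # the network used for actual testing
```

Returning the snapshot with the best validation loss, rather than the final weights, matches the description above: the network chosen for testing is the one that performed best on the validation set, not the one at the moment training halted.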
This technique is a simple but effective way to deal with the problem of overfitting. Overfitting is a phenomenon in which a learning system, such as a neural network, becomes very good at fitting one data set at the expense of becoming very bad at dealing with other data sets. By halting training early, the method effectively limits the weights the network can reach and thus imposes a regularization, effectively lowering the VC dimension.
Early stopping is a very common practice in neural network training and often produces networks that generalize well. However, while it often improves generalization, it does not do so in a mathematically well-defined way.