# composer.callbacks.early_stopper#

Early stopping callback.

Classes

EarlyStopper — Track a metric and halt training if it does not improve within a given interval.
class composer.callbacks.early_stopper.EarlyStopper(monitor, dataloader_label, comp=None, min_delta=0.0, patience=1)[source]#

Track a metric and halt training if it does not improve within a given interval.

Example:

>>> from composer import Evaluator, Trainer
>>> from composer.callbacks.early_stopper import EarlyStopper
>>> from torchmetrics.classification.accuracy import Accuracy
>>> # constructing trainer object with this callback
>>> early_stopper = EarlyStopper("Accuracy", "my_evaluator", patience=1)
>>> evaluator = Evaluator(
...     label='my_evaluator',
...     metrics=Accuracy()
... )
>>> trainer = Trainer(
...     model=model,
...     eval_dataloader=evaluator,
...     optimizers=optimizer,
...     max_duration="1ep",
...     callbacks=[early_stopper],
... )

Parameters
• monitor (str) – The name of the metric to monitor.
• dataloader_label (str) – The label of the dataloader or Evaluator associated with the monitored metric. If monitor is reported by an Evaluator, dataloader_label should be set to that Evaluator's label. If monitor is a training metric or an ordinary evaluation metric not tied to an Evaluator, dataloader_label should be set to the dataloader label, which defaults to 'train' or 'eval', respectively.
• comp (str | (Any, Any) -> Any, optional) – A comparison operator used to decide whether the monitored metric has improved. It is called as comp(current_value, prev_best). For metrics where the optimal value is low (error, loss, perplexity), use a less-than operator; for metrics like accuracy where the optimal value is high, use a greater-than operator. Defaults to torch.less() if 'loss', 'error', or 'perplexity' is a substring of the monitored metric's name, otherwise torch.greater().
• min_delta (float, optional) – The minimum amount by which a new value must beat the previous best to count as an improvement. Default: 0.0.
• patience (int | str, optional) – The interval during which the monitored metric may fail to improve before training is halted. Default: 1 epoch.
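The interaction between comp and min_delta can be sketched in plain Python. The helper below is a hypothetical illustration, not part of the Composer API — the real callback tracks the metric against Composer's training timeline — but it shows how a comparator plus a minimum delta decide whether a new metric value counts as an improvement:

```python
import operator

def is_improvement(current, best, comp=operator.gt, min_delta=0.0):
    """Hypothetical sketch of the check implied by comp/min_delta:
    a new value only counts as an improvement if it beats the
    previous best by more than min_delta in comp's direction."""
    if comp(current, best):
        return abs(current - best) > min_delta
    return False

# Higher-is-better metric (e.g. accuracy): 0.751 beats 0.75,
# but not by more than min_delta=0.01, so it is not an improvement.
print(is_improvement(0.751, 0.75, comp=operator.gt, min_delta=0.01))  # False
# Lower-is-better metric (e.g. loss) uses a less-than comparator.
print(is_improvement(0.40, 0.50, comp=operator.lt, min_delta=0.01))  # True
```

Under this reading, values that fail the check do not reset the improvement window, so after `patience` intervals without an improvement, training halts.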