- class composer.callbacks.EarlyStopper(monitor, dataloader_label, comp=None, min_delta=0.0, patience=1)#
Track a metric and halt training if it does not improve within a given interval.
Example:
>>> from composer import Evaluator, Trainer
>>> from composer.callbacks.early_stopper import EarlyStopper
>>> # constructing trainer object with this callback
>>> early_stopper = EarlyStopper('MulticlassAccuracy', 'my_evaluator', patience=1)
>>> evaluator = Evaluator(
...     dataloader=eval_dataloader,
...     label='my_evaluator',
...     metric_names=['MulticlassAccuracy']
... )
>>> trainer = Trainer(
...     model=model,
...     train_dataloader=train_dataloader,
...     eval_dataloader=evaluator,
...     optimizers=optimizer,
...     max_duration="1ep",
...     callbacks=[early_stopper],
... )
monitor (str) – The name of the metric to monitor.
dataloader_label (str) – The label of the dataloader or evaluator associated with the tracked metric. If monitor is in an Evaluator, dataloader_label should be set to that Evaluator's label. If monitor is a training metric or an ordinary evaluation metric not in an Evaluator, dataloader_label should be set to the dataloader's label, which defaults to 'train' for the training dataloader and 'eval' for the evaluation dataloader.
comp (str | (Any, Any) -> Any, optional) – A comparison operator to measure change of the monitored metric. The comparison operator will be called comp(current_value, prev_best). For metrics where the optimal value is low (error, loss, perplexity), use a less-than operator; for metrics like accuracy where the optimal value is higher, use a greater-than operator. Defaults to torch.less() if 'loss', 'error', or 'perplexity' is a substring of the monitored metric name, otherwise defaults to torch.greater().
min_delta (float, optional) – An optional float that requires a new value to exceed the best value by at least that amount before it counts as an improvement. Default: 0.0.
patience (Time | int | str, optional) – The interval of time the monitored metric can fail to improve before training is halted. Default: 1 epoch. If patience is an integer, it is interpreted as a number of epochs.
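To illustrate how comp, min_delta, and patience interact, here is a minimal, self-contained sketch of the stopping rule described above. It is not Composer's actual implementation (which tracks patience in Time units via the training loop); the closure, the `make_early_stopper` name, and the per-call patience counter are simplifications for illustration.

```python
import operator

def make_early_stopper(comp=operator.gt, min_delta=0.0, patience=3):
    """Return a callable mimicking the early-stopping rule (illustrative sketch).

    comp(current, best) decides the direction of improvement; a new value
    must also beat the best value by more than min_delta to reset patience.
    """
    state = {"best": None, "bad_intervals": 0}

    def should_stop(current):
        best = state["best"]
        # An improvement requires both comp(current, best) to hold and the
        # change to exceed min_delta (the first observation always counts).
        improved = best is None or (comp(current, best) and abs(current - best) > min_delta)
        if improved:
            state["best"] = current
            state["bad_intervals"] = 0
        else:
            state["bad_intervals"] += 1
        # Halt once the metric has failed to improve for more than `patience`
        # consecutive evaluation intervals.
        return state["bad_intervals"] > patience

    return should_stop
```

For a loss-like metric you would pass comp=operator.lt instead, mirroring the torch.less()/torch.greater() defaults described above.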