composer.core.event#
Events represent specific points in the training loop where an Algorithm and Callback can run.
Classes
Event: Enum to represent events in the training loop.
- class composer.core.event.Event(value)[source]#
Bases: composer.utils.string_enum.StringEnum
Enum to represent events in the training loop.
The following pseudocode shows where each event fires in the training loop:
```
# <INIT>
# <FIT_START>
for epoch in range(NUM_EPOCHS):
    # <EPOCH_START>
    for inputs, targets in dataloader:
        # <AFTER_DATALOADER>
        # <BATCH_START>

        # <BEFORE_FORWARD>
        outputs = model.forward(inputs)
        # <AFTER_FORWARD>

        # <BEFORE_LOSS>
        loss = model.loss(outputs, targets)
        # <AFTER_LOSS>

        # <BEFORE_BACKWARD>
        loss.backward()
        # <AFTER_BACKWARD>

        optimizer.step()
        # <BATCH_END>

        if should_eval(batch=True):
            # <EVAL_START>
            # <EVAL_BATCH_START>
            # <EVAL_BEFORE_FORWARD>
            # <EVAL_AFTER_FORWARD>
            # <EVAL_BATCH_END>
            # <EVAL_END>

        # <BATCH_CHECKPOINT>
    # <EPOCH_END>

    if should_eval(batch=False):
        # <EVAL_START>
        # <EVAL_BATCH_START>
        # <EVAL_BEFORE_FORWARD>
        # <EVAL_AFTER_FORWARD>
        # <EVAL_BATCH_END>
        # <EVAL_END>

    # <EPOCH_CHECKPOINT>
```
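As a rough illustration of how code can hook into these points, here is a minimal, self-contained event dispatcher. Everything in it (the `Engine` class, `register()`/`run_event()`, the string event names) is an illustrative assumption for this sketch, not Composer's actual API; in Composer itself, algorithms and callbacks receive events through the trainer's engine.

```python
# A minimal sketch of an event dispatcher driving a loop like the one above.
# The Engine class and register()/run_event() names are illustrative
# assumptions, not Composer's actual API.
from collections import defaultdict

class Engine:
    """Maps event names to handler lists and fires them in order."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def register(self, event, handler):
        self._handlers[event].append(handler)

    def run_event(self, event):
        for handler in self._handlers[event]:
            handler(event)

def fit(engine, num_epochs, batches_per_epoch):
    """Fire events at the same points as the pseudocode above."""
    engine.run_event("INIT")
    engine.run_event("FIT_START")
    for _ in range(num_epochs):
        engine.run_event("EPOCH_START")
        for _ in range(batches_per_epoch):
            engine.run_event("BATCH_START")
            # forward / loss / backward / optimizer.step() would run here
            engine.run_event("BATCH_END")
        engine.run_event("EPOCH_END")

counts = defaultdict(int)

def count(event):
    counts[event] += 1

engine = Engine()
for name in ("INIT", "EPOCH_START", "BATCH_START", "BATCH_END", "EPOCH_END"):
    engine.register(name, count)

fit(engine, num_epochs=2, batches_per_epoch=3)
```

With two epochs of three batches each, `INIT` fires once, the epoch events twice, and the batch events six times.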
- INIT#
Invoked in the constructor of Trainer. Model surgery (see module_surgery) typically occurs here.
- FIT_START#
Invoked at the beginning of each call to Trainer.fit(). Dataset transformations typically occur here.
- EPOCH_START#
Start of an epoch.
- BATCH_START#
Start of a batch.
- AFTER_DATALOADER#
Immediately after the dataloader is called. Typically used for on-GPU dataloader transforms.
- BEFORE_TRAIN_BATCH#
Before the forward-loss-backward computation for a training batch. When using gradient accumulation, this event still fires only once per batch, not once per microbatch.
- BEFORE_FORWARD#
Before the call to model.forward().
- AFTER_FORWARD#
After the call to model.forward().
- BEFORE_LOSS#
Before the call to model.loss().
- AFTER_LOSS#
After the call to model.loss().
- BEFORE_BACKWARD#
Before the call to loss.backward().
- AFTER_BACKWARD#
After the call to loss.backward().
- AFTER_TRAIN_BATCH#
After the forward-loss-backward computation for a training batch. When using gradient accumulation, this event still fires only once.
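The gradient-accumulation behavior described for BEFORE_TRAIN_BATCH and AFTER_TRAIN_BATCH can be made concrete with a small sketch. The `fire` helper and the microbatch loop below are illustrative stand-ins, not Composer internals: the per-batch events fire once per optimization batch, while the forward/backward events fire once per microbatch.

```python
# Sketch: with gradient accumulation, BEFORE_TRAIN_BATCH / AFTER_TRAIN_BATCH
# fire once per optimization batch, while BEFORE_FORWARD / AFTER_BACKWARD
# fire once per microbatch. `fire` is a stand-in for the real event engine.
from collections import Counter

fired = Counter()

def fire(event):
    fired[event] += 1

def run_batch(microbatches):
    fire("BEFORE_TRAIN_BATCH")      # once per optimization batch
    for _ in range(microbatches):
        fire("BEFORE_FORWARD")      # once per microbatch
        # forward / loss / backward on the microbatch would run here
        fire("AFTER_BACKWARD")
    fire("AFTER_TRAIN_BATCH")       # once, after all microbatches

run_batch(microbatches=4)           # i.e., gradient accumulation over 4 steps
```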
- BATCH_END#
End of a batch, which occurs after the optimizer step and any gradient scaling.
- BATCH_CHECKPOINT#
After Event.BATCH_END and any batch-wise evaluation. Saving checkpoints at this event allows the checkpoint saver to use the results from any batch-wise evaluation to determine whether a checkpoint should be saved.
- EPOCH_END#
End of an epoch.
- EPOCH_CHECKPOINT#
After Event.EPOCH_END and any epoch-wise evaluation. Saving checkpoints at this event allows the checkpoint saver to use the results from any epoch-wise evaluation to determine whether a checkpoint should be saved.
- EVAL_START#
Start of evaluation through the validation dataset.
- EVAL_BATCH_START#
Before the call to model.validate(batch).
- EVAL_BEFORE_FORWARD#
Before the call to model.validate(batch).
- EVAL_AFTER_FORWARD#
After the call to model.validate(batch).
- EVAL_BATCH_END#
After the call to model.validate(batch).
- EVAL_END#
End of evaluation through the validation dataset.
- property canonical_name#
The name of the event, without before/after markers.
Events that have a corresponding "before" or "after" event share the same canonical name.
Example
```
>>> Event.EPOCH_START.canonical_name
'epoch'
>>> Event.EPOCH_END.canonical_name
'epoch'
```
- Returns
str – The canonical name of the event.
- property is_after_event#
Whether the event is an "after_*" event (e.g., AFTER_LOSS) and has a corresponding "before_*" event (e.g., BEFORE_LOSS).
- property is_before_event#
Whether the event is a "before_*" event (e.g., BEFORE_LOSS) and has a corresponding "after_*" event (e.g., AFTER_LOSS).
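These properties can be illustrated with a self-contained sketch that re-derives them from an event's string value. This is an illustrative reimplementation using Python's standard enum module, not Composer's actual source; the name-stripping rules shown are inferred from the documented examples.

```python
# Illustrative re-derivation of canonical_name / is_before_event /
# is_after_event from an event's string value; not Composer's actual source.
from enum import Enum

class Event(Enum):
    EPOCH_START = "epoch_start"
    EPOCH_END = "epoch_end"
    BEFORE_LOSS = "before_loss"
    AFTER_LOSS = "after_loss"

    @property
    def canonical_name(self):
        # Strip before/after and start/end markers so paired events
        # share one canonical name (e.g., "epoch", "loss").
        name = self.value
        name = name.replace("before_", "").replace("after_", "")
        name = name.replace("_start", "").replace("_end", "")
        return name

    @property
    def is_before_event(self):
        return self.value.startswith("before_")

    @property
    def is_after_event(self):
        return self.value.startswith("after_")
```

For example, both `Event.BEFORE_LOSS` and `Event.AFTER_LOSS` yield the canonical name `"loss"`, mirroring the `"epoch"` example above.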