Monitor gradients during training.
- class composer.callbacks.grad_monitor.GradMonitor(log_layer_grad_norms=False)
Computes and logs the L2 norm of gradients on the Event.AFTER_TRAIN_BATCH event.
L2 norms are calculated after the reduction of gradients across GPUs. This function iterates over the parameters of the model and hence may reduce throughput while training large models. To ensure the correctness of the norm, this function should be called after gradient unscaling in cases where gradients are scaled.
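For intuition, here is a minimal PyTorch sketch of the global gradient L2 norm computation that a callback like this performs after each backward pass. The toy model and the manual squared-sum accumulation are illustrative stand-ins, not Composer's exact implementation.

import torch

# Toy model and a single backward pass to populate .grad fields.
model = torch.nn.Linear(4, 2)
loss = model(torch.randn(8, 4)).sum()
loss.backward()

# Accumulate the squared L2 norm over every parameter that has a gradient,
# then take a single square root at the end. Iterating over all parameters
# is what can cost throughput on large models.
squared_sum = sum(
    p.grad.detach().norm(2).item() ** 2
    for p in model.parameters()
    if p.grad is not None
)
grad_l2_norm = squared_sum ** 0.5
print(f"grad_l2_norm/step: {grad_l2_norm:.4f}")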
>>> from composer.callbacks import GradMonitor
>>> # constructing trainer object with this callback
>>> trainer = Trainer(
...     model=model,
...     train_dataloader=train_dataloader,
...     eval_dataloader=eval_dataloader,
...     optimizers=optimizer,
...     max_duration="1ep",
...     callbacks=[GradMonitor()],
... )
The L2 norms are logged by the Logger to the following keys:
- grad_l2_norm/step: L2 norm of the gradients of all parameters in the model on the Event.AFTER_TRAIN_BATCH event.
- layer_grad_l2_norm/LAYER_NAME: Layer-wise L2 norms, logged only if log_layer_grad_norms is True (default False).
log_layer_grad_norms (bool, optional) – Whether to log the L2 norm of the gradients of each layer. Defaults to False.
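To illustrate the layer-wise logging, the sketch below derives one key per named parameter in the layer_grad_l2_norm/LAYER_NAME format from the list above. The toy model and the exact key strings are assumptions for illustration, not Composer's verbatim output.

import torch

# Toy model with gradients populated by one backward pass.
model = torch.nn.Linear(4, 2)
model(torch.randn(8, 4)).sum().backward()

# One L2 norm per named parameter, keyed the way the table above describes.
for name, p in model.named_parameters():
    if p.grad is not None:
        print(f"layer_grad_l2_norm/{name}: {p.grad.detach().norm(2).item():.4f}")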