composer.algorithms
Efficiency methods for training. Examples include LabelSmoothing and adding SqueezeExcite blocks, among many others.
Algorithms are implemented both in a standalone functional form (see composer.functional) and as subclasses of Algorithm for integration in the Composer Trainer. The former are easier to integrate piecemeal into an existing codebase. The latter are easier to compose together, since they all share the same public interface and work automatically with the Composer Trainer.
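As a quick sketch of the functional form, an algorithm can be applied directly to an ordinary PyTorch model with no Trainer involved. The example below uses cf.apply_blurpool on a torchvision ResNet-50; the model and optimizer here are placeholders, and the exact defaults are illustrative rather than authoritative:

import torch
import torchvision.models as models
import composer.functional as cf

# Build a standard PyTorch model.
model = models.resnet50()

# Functional form: modify the model in place, no Trainer required.
# BlurPool swaps eligible convolution and pooling layers for anti-aliased versions.
cf.apply_blurpool(model)

# The modified model works with any existing PyTorch training loop.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)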
For ease of composability, algorithms in our Trainer are based on the two-way callbacks concept from Howard et al, 2020. Each algorithm implements two methods:

- Algorithm.match(): returns True if the algorithm should be run given the current State and Event.
- Algorithm.apply(): performs an in-place modification of the given State.

For example, a simple algorithm that shortens training:
from composer import Algorithm, State, Event, Logger

class ShortenTraining(Algorithm):

    def match(self, state: State, event: Event, logger: Logger) -> bool:
        return event == Event.INIT

    def apply(self, state: State, event: Event, logger: Logger):
        state.max_duration /= 2  # cut training time in half
For more information about events, see Event.
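An algorithm written this way can be composed with Composer's built-in algorithms by passing instances to the Trainer. The following is a minimal sketch: model and train_dataloader are placeholders assumed to be defined elsewhere (a ComposerModel and a PyTorch dataloader), and the LabelSmoothing argument is illustrative:

from composer import Trainer
from composer.algorithms import BlurPool, LabelSmoothing

# Each algorithm watches for its events via match() and modifies the State via apply().
trainer = Trainer(
    model=model,                          # a ComposerModel, defined elsewhere
    train_dataloader=train_dataloader,    # a PyTorch dataloader, defined elsewhere
    max_duration='10ep',
    algorithms=[
        BlurPool(),
        LabelSmoothing(smoothing=0.1),
        ShortenTraining(),                # the custom algorithm defined above
    ],
)
trainer.fit()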
Classes
| Class | Description |
| --- | --- |
| Alibi | ALiBi (Attention with Linear Biases; Press et al, 2021) dispenses with position embeddings and instead directly biases attention matrices such that nearby tokens attend to one another more strongly. |
| AugMix | The AugMix data augmentation technique. |
| AugmentAndMixTransform | Wrapper module for the functional augmix_image() transform, for use in torchvision-style transform pipelines. |
| BlurPool | BlurPool adds anti-aliasing filters to convolutional layers. |
| ChannelsLast | Changes the memory format of the model to torch.channels_last. |
| ColOut | Drops a fraction of the rows and columns of an input image and (optionally) a target image. |
| ColOutTransform | Torchvision-like transform for performing the ColOut augmentation, where random rows and columns are dropped from up to two Torch tensors or two PIL images. |
| CutMix | CutMix trains the network on non-overlapping combinations of pairs of examples and interpolated targets rather than individual examples and targets. |
| CutOut | CutOut is a data augmentation technique that works by masking out one or more square regions of an input image. |
| EMA | Maintains a set of weights that follow the exponential moving average of the training model weights. |
| Factorize | Decomposes linear operators into pairs of smaller linear operators. |
| GatedLinearUnits | Replaces all instances of Linear layers in the feed-forward subnetwork with a Gated Linear Unit. |
| GhostBatchNorm | Replaces batch normalization modules with Ghost Batch Normalization modules that simulate the effect of using a smaller batch size. |
| GradientClipping | Clips all gradients in the model based on the specified clipping_type. |
| GyroDropout | Replaces all instances of torch.nn.Dropout with a GyroDropout. |
| LabelSmoothing | Shrinks targets towards a uniform distribution as in Szegedy et al. |
| LayerFreezing | Progressively freezes the layers of the network during training, starting with the earlier layers. |
| LowPrecisionGroupNorm | Replaces all instances of torch.nn.GroupNorm with a low-precision implementation. |
| LowPrecisionLayerNorm | Replaces all instances of torch.nn.LayerNorm with a low-precision implementation. |
| MixUp | MixUp trains the network on convex batch combinations. |
| NoOpModel | Runs on Event.INIT and replaces the model with a dummy no-op model. |
| ProgressiveResizing | Resizes inputs and optionally outputs by cropping or interpolating. |
| RandAugment | Randomly applies a sequence of image data augmentations to an image. |
| RandAugmentTransform | Wraps the functional randaugment_image() transform for use as a torchvision-style transform. |
| SAM | Adds sharpness-aware minimization (Foret et al, 2020) by wrapping an existing optimizer with a SAMOptimizer. |
| SWA | Applies Stochastic Weight Averaging (Izmailov et al, 2018). |
| SelectiveBackprop | Selectively backpropagates gradients from a subset of each batch. |
| SeqLengthWarmup | Progressively increases the sequence length during training. |
| SqueezeExcite | Adds Squeeze-and-Excitation blocks (Hu et al, 2019) after the Conv2d modules in a neural network. |
| SqueezeExcite2d | Squeeze-and-Excitation block from Hu et al, 2019. |
| SqueezeExciteConv2d | Helper class used to add a SqueezeExcite2d module after a Conv2d. |
| StochasticDepth | Applies Stochastic Depth (Huang et al, 2016) to the specified model. |
| WeightStandardization | Weight Standardization standardizes convolutional weights in a model. |