composer.algorithms
Modules
composer.algorithms.algorithm_hparams
composer.algorithms.algorithm_registry
ALiBi (Attention with Linear Biases; Press et al, 2021) dispenses with position embeddings for tokens in transformer-based NLP models, instead encoding position information by biasing the query-key attention scores proportionally to each token pair's distance.
AugMix (Hendrycks et al, 2020) creates multiple independent realizations of sequences of image augmentations, applies each sequence with random intensity, and returns a convex combination of the augmented images and the original image.
BlurPool adds anti-aliasing filters to convolutional layers to increase accuracy and invariance to small shifts in the input.
composer.algorithms.channels_last
Drops a fraction of the rows and columns of an input image.
CutMix trains the network on non-overlapping combinations of pairs of examples and interpolated targets rather than individual examples and targets.
Cutout is a data augmentation technique that works by masking out one or more square regions of an input image.
Decomposes linear operators into pairs of smaller linear operators.
Replaces batch normalization modules with Ghost Batch Normalization modules that simulate the effect of using a smaller batch size.
composer.algorithms.hparams
Shrinks targets towards a uniform distribution to counteract label noise.
Progressively freezes the layers of the network during training, starting with the earlier layers.
Creates new samples using convex combinations of pairs of samples.
composer.algorithms.no_op_model
Applies Fastai's progressive resizing data augmentation to speed up training.
Randomly applies a sequence of image data augmentations (Cubuk et al, 2019) to an image.
composer.algorithms.sam
composer.algorithms.scale_schedule
composer.algorithms.selective_backprop
Sequence length warmup progressively increases the sequence length during training of NLP models.
Adds Squeeze-and-Excitation blocks (Hu et al, 2019) after the Conv2d modules in a neural network.
Implements stochastic depth (Huang et al, 2016) for ResNet blocks.
Stochastic Weight Averaging (SWA; Izmailov et al, 2018) averages model weights sampled at different times near the end of training.
Helper utilities for algorithms.
composer.algorithms.warnings
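Many of the augmentations listed above reduce to simple tensor arithmetic. As one illustration, MixUp's convex combination of a pair of samples can be sketched as follows; the function and parameter names here are illustrative stand-ins, not Composer's actual API, and samples are flat lists of floats rather than tensors.

```python
import random

def mixup_pair(x1, x2, alpha=0.2, rng=None):
    """Illustrative MixUp: return a convex combination of two samples.

    The mixing coefficient lam is drawn from a Beta(alpha, alpha)
    distribution, as in Zhang et al, 2018.
    """
    rng = rng or random.Random()
    lam = rng.betavariate(alpha, alpha)  # mixing coefficient in [0, 1]
    mixed = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    return mixed, lam  # lam is also used to interpolate the targets
```

The same `lam` is used to form the corresponding convex combination of the two targets, so the model is trained on interpolated examples and interpolated labels together.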
Efficiency methods for training. Examples include smoothing the labels and adding Squeeze-and-Excitation blocks, among many others.
Algorithms are implemented in both a standalone functional form (see composer.functional) and as subclasses of Algorithm for integration in the Composer Trainer. The former are easier to integrate piecemeal into an existing codebase. The latter are easier to compose together, since they all have the same public interface and work automatically with the Composer Trainer.
For ease of composability, algorithms in our Trainer are based on the two-way callbacks concept from Howard et al, 2020. Each algorithm implements two methods:

Algorithm.match(): returns True if the algorithm should be run given the current State and Event.
Algorithm.apply(): performs an in-place modification of the given State.

For example, a simple algorithm that shortens training:

    from composer import Algorithm, State, Event, Logger

    class ShortenTraining(Algorithm):

        def match(self, state: State, event: Event, logger: Logger) -> bool:
            return event == Event.INIT

        def apply(self, state: State, event: Event, logger: Logger):
            state.max_duration /= 2  # cut training time in half

For more information about events, see Event.
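The two-way callback pattern itself is easy to demonstrate outside of Composer. The sketch below uses minimal stand-ins for State and Event (not Composer's actual classes) to show how a runner composes multiple algorithms by calling match() and then apply():

```python
from dataclasses import dataclass
from enum import Enum

class Event(Enum):
    INIT = "init"
    BATCH_START = "batch_start"

@dataclass
class State:
    max_duration: int = 100  # stand-in for the training length

class ShortenTraining:
    """Toy algorithm mirroring the example above."""

    def match(self, state: State, event: Event) -> bool:
        return event == Event.INIT

    def apply(self, state: State, event: Event) -> None:
        state.max_duration //= 2  # cut training time in half

def run_event(event: Event, state: State, algorithms) -> None:
    # The runner needs no knowledge of individual algorithms:
    # the shared match()/apply() interface is what makes them composable.
    for algorithm in algorithms:
        if algorithm.match(state, event):
            algorithm.apply(state, event)
```

Running `run_event(Event.INIT, state, [ShortenTraining()])` halves `state.max_duration`, while other events leave it untouched; adding more algorithms to the list requires no change to the runner.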
Functions
composer.algorithms.algorithm_registry.get_algorithm_registry
composer.algorithms.algorithm_registry.list_algorithms
Classes
ALiBi (Attention with Linear Biases; Press et al, 2021) dispenses with position embeddings and instead directly biases attention matrices such that nearby tokens attend to one another more strongly.
AugMix (Hendrycks et al, 2020) creates multiple independent realizations of sequences of image augmentations, applies each sequence with random intensity, and returns a convex combination of the augmented images and the original image.
BlurPool adds anti-aliasing filters to convolutional layers to increase accuracy and invariance to small shifts in the input.
Changes the memory format of the model to channels_last.
Drops a fraction of the rows and columns of an input image.
Torchvision-like transform for performing the ColOut augmentation, where random rows and columns are dropped from a single image.
CutMix trains the network on non-overlapping combinations of pairs of examples and interpolated targets rather than individual examples and targets.
Cutout is a data augmentation technique that works by masking out one or more square regions of an input image.
Decomposes linear operators into pairs of smaller linear operators.
Replaces batch normalization modules with Ghost Batch Normalization modules that simulate the effect of using a smaller batch size.
Shrinks targets towards a uniform distribution as in Szegedy et al.
Progressively freezes the layers of the network during training, starting with the earlier layers.
MixUp trains the network on convex combinations of pairs of examples and targets rather than individual examples and targets.
composer.algorithms.no_op_model.no_op_model.NoOpModel
Applies Fastai's progressive resizing data augmentation to speed up training.
Randomly applies a sequence of image data augmentations (Cubuk et al, 2019) to an image.
Adds sharpness-aware minimization (Foret et al, 2020) by wrapping an existing optimizer with a SAMOptimizer.
Applies Stochastic Weight Averaging (Izmailov et al, 2018).
Deprecated - do not use.
Selectively backpropagates gradients from a subset of each batch (Jiang et al, 2019).
Progressively increases the sequence length during training.
Adds Squeeze-and-Excitation blocks (Hu et al, 2019) after the Conv2d modules in a neural network.
Squeeze-and-Excitation block from (Hu et al, 2019).
Applies Stochastic Depth (Huang et al, 2016) to the specified model.
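As a concrete instance of one of the methods above, shrinking targets toward a uniform distribution (label smoothing) is a one-line transform. The sketch below uses illustrative names rather than Composer's actual API, and operates on a one-hot target given as a list of floats:

```python
def smooth_labels(one_hot, smoothing=0.1):
    """Shrink a one-hot target toward the uniform distribution.

    Each entry becomes (1 - smoothing) * target + smoothing / num_classes,
    so the result is a valid distribution that still sums to 1.
    """
    n = len(one_hot)
    return [(1 - smoothing) * t + smoothing / n for t in one_hot]
```

For example, `smooth_labels([1.0, 0.0, 0.0, 0.0], smoothing=0.1)` yields `[0.925, 0.025, 0.025, 0.025]`: the correct class keeps most of the mass while every class receives a small floor, which counteracts label noise and overconfidence.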
Hparams
These classes are used with yahp for YAML-based configuration.
Hyperparameters for algorithms.
ChannelsLast has no hyperparameters, so this class has no member variables.
composer.algorithms.hparams.NoOpModelHparams
composer.algorithms.hparams.SeqLengthWarmupHparams
Methods
load_multiple()
load()