composer.algorithms.cutmix.cutmix#
Core CutMix classes and functions.
Functions

cutmix_batch – Create new samples using combinations of pairs of samples.

Classes

CutMix – CutMix trains the network on non-overlapping combinations of pairs of examples and interpolated targets rather than individual examples and targets.
- class composer.algorithms.cutmix.cutmix.CutMix(num_classes, alpha=1.0, uniform_sampling=False)[source]#
Bases: composer.core.algorithm.Algorithm

CutMix trains the network on non-overlapping combinations of pairs of examples and interpolated targets rather than individual examples and targets.

This is done by taking a non-overlapping combination of a given batch X with a randomly permuted copy of X. The area of the mixed region is drawn from a Beta(alpha, alpha) distribution.

Training in this fashion sometimes reduces generalization error.
- Parameters

num_classes (int) – the number of classes in the task labels.

alpha (float, optional) – the pseudocount for the Beta distribution used to sample area parameters. As alpha grows, the two samples in each pair tend to be weighted more equally. As alpha approaches 0 from above, the combination approaches only using one element of the pair (see the sketch after this list). Default: 1.

uniform_sampling (bool, optional) – If True, sample the bounding box such that each pixel has an equal probability of being mixed. If False, defaults to the sampling used in the original paper implementation. Default: False.
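The effect of alpha on the Beta(alpha, alpha) draw can be seen directly by sampling. This is a small illustrative sketch for intuition only, not part of the Composer API:

import torch

# Intuition for the alpha pseudocount (illustrative only):
# small alpha -> draws pile up near 0 or 1, so one sample in the pair dominates;
# large alpha -> draws concentrate near 0.5, so the pair is weighted more equally.
for alpha in (0.1, 1.0, 10.0):
    draws = torch.distributions.Beta(alpha, alpha).sample((10000,))
    print(f"alpha={alpha}: mean={draws.mean().item():.2f}, std={draws.std().item():.2f}")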
Example
from composer.algorithms import CutMix

algorithm = CutMix(num_classes=10, alpha=0.2)
trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    eval_dataloader=eval_dataloader,
    max_duration="1ep",
    algorithms=[algorithm],
    optimizers=[optimizer]
)
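For intuition, the mixing described above can be sketched in a few lines. This is a simplified illustration with a hypothetical helper name and an already-chosen box, not Composer's internal implementation:

import torch

# Simplified sketch of the CutMix operation (not Composer's internal code):
# pair samples by permuting the batch, paste one box for every sample, and
# interpolate the (one-hot) targets by the pasted area fraction.
def cutmix_sketch(X, y_onehot, x1, y1, x2, y2):
    perm = torch.randperm(X.shape[0])                      # random pairing of samples
    X_mixed = X.clone()
    X_mixed[:, :, y1:y2, x1:x2] = X[perm, :, y1:y2, x1:x2]
    area = (y2 - y1) * (x2 - x1) / (X.shape[2] * X.shape[3])
    y_mixed = (1 - area) * y_onehot + area * y_onehot[perm]
    return X_mixed, y_mixed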
- composer.algorithms.cutmix.cutmix.cutmix_batch(input, target, num_classes, length=None, alpha=1.0, bbox=None, indices=None, uniform_sampling=False)[source]#
Create new samples using combinations of pairs of samples.
This is done by masking a region of each image in input and filling the masked region with the corresponding content from a random different image in input. The position of the masked region is determined by drawing a center point uniformly at random from all spatial positions.

The area of the masked region is computed using either length or alpha. If length is provided, it directly determines the size of the masked region. If it is not provided, the fraction of the input area to mask is drawn from a Beta(alpha, alpha) distribution. The original paper used a fixed value of alpha = 1.

Alternatively, one may provide a bounding box to mask directly, in which case alpha is ignored and length must not be provided.

The same masked region is used for the whole batch.
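As a rough sketch of this geometry (an illustration under assumptions, not the library's exact sampling code), the box can be derived from the Beta-drawn area fraction and a uniformly drawn center, then clipped at the boundaries as described in the note below:

import torch

# Illustrative derivation of the cut box (assumed, not the library's exact code).
alpha, H, W = 1.0, 224, 224
area_frac = torch.distributions.Beta(alpha, alpha).sample()  # fraction of area to mask
side = area_frac.sqrt()                                      # relative side length
cut_h, cut_w = int(side * H), int(side * W)

# Center drawn uniformly at random over all spatial positions.
cy, cx = int(torch.randint(H, (1,))), int(torch.randint(W, (1,)))

# Clip at the image boundaries, so the realized region may be smaller than nominal.
y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, H)
x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, W)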
Note
The masked region is clipped at the spatial boundaries of the inputs. This means that there is no padding required, but the actual region used may be smaller than the nominal size computed using length or alpha.

- Parameters
input (Tensor) – input tensor of shape (N, C, H, W).

target (Tensor) – target tensor of either shape N or (N, num_classes). In the former case, elements of target must be integer class ids in the range 0..num_classes. In the latter case, rows of target may be arbitrary vectors of targets, including, e.g., one-hot encoded class labels, smoothed class labels, or multi-output regression targets.

num_classes (int) – total number of classes or output variables.

length (float, optional) – Relative side length of the masked region. If specified, length is interpreted as a fraction of H and W, and the resulting box is of size (length * H, length * W). Default: None.

alpha (float, optional) – parameter for the Beta distribution over the fraction of the input to mask. Ignored if length is provided. Default: 1.

bbox (tuple, optional) – predetermined (x1, y1, x2, y2) coordinates of the bounding box. Default: None.

indices (Tensor, optional) – Permutation of the samples to use. Default: None.

uniform_sampling (bool, optional) – If True, sample the bounding box such that each pixel has an equal probability of being mixed. If False, defaults to the sampling used in the original paper implementation. Default: False.
- Returns
input_mixed (torch.Tensor) – batch of inputs after cutmix has been applied.

target_mixed (torch.Tensor) – soft labels for the mixed input samples. These are a convex combination of the (possibly one-hot-encoded) labels from the original samples and the samples chosen to fill the masked regions, with the relative weighting equal to the fraction of the spatial size that is cut. E.g., if a sample was originally an image with label 0 and 40% of the image was replaced with data from an image with label 2, the resulting labels, assuming only three classes, would be [1, 0, 0] * 0.6 + [0, 0, 1] * 0.4 = [0.6, 0, 0.4].
- Raises
ValueError – If both length and bbox are provided.
Example
import torch
from composer.functional import cutmix_batch

N, C, H, W = 2, 3, 4, 5
num_classes = 10
X = torch.randn(N, C, H, W)
y = torch.randint(num_classes, size=(N,))
X_mixed, y_mixed = cutmix_batch(X, y, num_classes=num_classes, alpha=0.2)
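Passing bbox and indices fixes the pairing and the cut region, which can make the mixing reproducible; the specific values below are hypothetical:

import torch
from composer.functional import cutmix_batch

N, C, H, W = 2, 3, 4, 5
num_classes = 10
X = torch.randn(N, C, H, W)
y = torch.randint(num_classes, size=(N,))

indices = torch.tensor([1, 0])   # hypothetical fixed pairing: swap the two samples
bbox = (1, 0, 4, 2)              # hypothetical (x1, y1, x2, y2) within the 5x4 input
X_mixed, y_mixed = cutmix_batch(
    X, y, num_classes=num_classes, bbox=bbox, indices=indices)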