Core AugMix classes and functions.
Applies AugMix (Hendrycks et al, 2020) data augmentation to a single image or batch of images.
- class composer.algorithms.augmix.augmix.AugMix(severity=3, depth=-1, width=3, alpha=1.0, augmentation_set='all')#
AugMix (Hendrycks et al, 2020) creates `width` sequences of `depth` image augmentations, applies each sequence with random intensity, and returns a convex combination of the `width` augmented images and the original image. The coefficients for mixing the augmented images are drawn from a uniform `Dirichlet(alpha, alpha, ...)` distribution. The coefficient for mixing the combined augmented image and the original image is drawn from a `Beta(alpha, alpha)` distribution, using the same `alpha`.
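As an illustration, the mixing scheme described above can be sketched with NumPy. This is a simplified sketch, not Composer's implementation; `augment_fn` stands in for applying one randomly chosen augmentation:

```python
import numpy as np

def augmix_sketch(img, augment_fn, width=3, depth=-1, alpha=1.0, rng=None):
    """Simplified sketch of the AugMix mixing scheme (not Composer's code).

    img: float array; augment_fn: applies one randomly chosen augmentation.
    """
    rng = rng or np.random.default_rng()
    # Per-sequence mixing weights from a uniform Dirichlet(alpha, ..., alpha).
    ws = rng.dirichlet([alpha] * width)
    # Original-vs-augmented blend weight from Beta(alpha, alpha).
    m = rng.beta(alpha, alpha)

    mix = np.zeros_like(img)
    for w in ws:
        chain = img.copy()
        # depth=-1 means stochastic depth, drawn uniformly from [1, 3].
        d = depth if depth > 0 else rng.integers(1, 4)
        for _ in range(d):
            chain = augment_fn(chain)
        mix += w * chain
    # Convex combination of the original and the mixed augmented images.
    return (1 - m) * img + m * mix
```

Note that with an identity `augment_fn` the output equals the input, since the result is a convex combination of copies of the original image.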
This algorithm runs on FIT_START to insert a dataset transformation. It is a no-op if this algorithm has already applied itself to the training dataset.
See the Method Card for more details.
```python
from composer.algorithms import AugMix
from composer.trainer import Trainer

augmix_algorithm = AugMix(
    severity=3,
    width=3,
    depth=-1,
    alpha=1.0,
    augmentation_set="all",
)

trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    eval_dataloader=eval_dataloader,
    max_duration="1ep",
    algorithms=[augmix_algorithm],
    optimizers=[optimizer],
)
```
severity (int, optional) – Severity of augmentations; ranges from 0 (no augmentation) to 10 (most severe). Default: 3.
depth (int, optional) – Number of augmentations per sequence. -1 enables stochastic depth, sampled uniformly from [1, 3]. Default: -1.
width (int, optional) – Number of augmentation sequences. Default: 3.
alpha (float, optional) – Pseudocount for the Beta and Dirichlet distributions. Must be > 0. Higher values yield mixing coefficients closer to uniform weighting. As the value approaches 0, the mixing coefficients approach using only one version of each image. Default: 1.0.
augmentation_set (str, optional) – Must be one of the following options, as also described in `augmentation_sets`:
- `"all"`: Uses all augmentations from the paper.
- `"safe"`: Like `"all"`, but excludes transforms that are part of the ImageNet-C/CIFAR10-C test sets.
- `"original"`: Like `"all"`, but some of the implementations are identical to the original Github repository, which contains implementation specificities for the augmentations `"color"`, `"contrast"`, `"sharpness"`, and `"brightness"`. The original implementations have an intensity sampling scheme that samples a value bounded by 0.118 at a minimum and a maximum of \(intensity \times 0.18 + 0.1\), which ranges from 0.28 (intensity = 1) to 1.9 (intensity = 10). These augmentations have different effects depending on whether they are < 0 or > 0 (or < 1 or > 1). `"all"` uses implementations of `"color"`, `"contrast"`, `"sharpness"`, and `"brightness"` that account for diverging effects around 0 (or 1).

Default: `"all"`.
- class composer.algorithms.augmix.augmix.AugmentAndMixTransform(severity=3, depth=-1, width=3, alpha=1.0, augmentation_set='all')#

Wrapper module for augmix_image that can be passed to torchvision.transforms.Compose.
```python
import torchvision.transforms as transforms

from composer.algorithms.augmix import AugmentAndMixTransform

augmix_transform = AugmentAndMixTransform(
    severity=3,
    width=3,
    depth=-1,
    alpha=1.0,
    augmentation_set="all",
)
composed = transforms.Compose([augmix_transform, transforms.RandomHorizontalFlip()])
transformed_image = composed(image)
```
- composer.algorithms.augmix.augmix.augmix_image(img, severity=3, depth=-1, width=3, alpha=1.0, augmentation_set=[<function autocontrast>, <function equalize>, <function posterize>, <function rotate>, <function solarize>, <function shear_x>, <function shear_y>, <function translate_x>, <function translate_y>, <function color>, <function contrast>, <function brightness>, <function sharpness>])#
Applies AugMix (Hendrycks et al, 2020) data augmentation to a single image or batch of images. See AugMix and the Method Card for details. This function only acts on a single image (or batch) per call and is unlikely to be used in a training loop. Use AugmentAndMixTransform to apply AugMix as part of a torchvision transform pipeline.
```python
import composer.functional as cf
from composer.algorithms.utils import augmentation_sets

augmixed_image = cf.augmix_image(
    img=image,
    severity=3,
    width=3,
    depth=-1,
    alpha=1.0,
    augmentation_set=augmentation_sets["all"],
)
```
Returns: PIL.Image – AugMix’d image.