smooth_labels

composer.functional.smooth_labels(logits, target, smoothing=0.1)

Shrink targets towards a uniform distribution as in Szegedy et al.

The smoothed labels are computed as (1 - smoothing) * targets + smoothing * unif, where unif is a vector with all elements equal to 1 / num_classes.
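For intuition, here is a minimal worked sketch of that formula (the values num_classes = 4 and smoothing = 0.1 are chosen only for illustration; a hard integer target is first converted to a one-hot vector):

import torch
import torch.nn.functional as F

num_classes = 4
smoothing = 0.1

# Hard target: class 2 out of 4 classes, as a one-hot vector.
hard = F.one_hot(torch.tensor(2), num_classes).float()  # [0., 0., 1., 0.]

# (1 - smoothing) * targets + smoothing * unif
unif = torch.full((num_classes,), 1.0 / num_classes)    # [0.25, 0.25, 0.25, 0.25]
smoothed = (1 - smoothing) * hard + smoothing * unif
print(smoothed)  # tensor([0.0250, 0.0250, 0.9250, 0.0250])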

Parameters
  • logits (Tensor) – predicted value for target, or any other tensor with the same shape. Shape must be (N, num_classes, ...) for N examples and num_classes classes with any number of optional extra dimensions.

  • target (Tensor) – target tensor of either shape N or (N, num_classes, ...). In the former case, elements of target must be integer class ids in the range 0 to num_classes - 1. In the latter case, target must have the same shape as logits.

  • smoothing (float, optional) – strength of the label smoothing, in \([0, 1]\). smoothing=0 means no label smoothing, and smoothing=1 means maximal smoothing (targets are ignored). Default: 0.1.

Returns

torch.Tensor – The smoothed targets.

Example

import torch
from composer.algorithms.label_smoothing import smooth_labels

num_classes = 10
logits = torch.randn(100, num_classes)  # predictions of shape (N, num_classes)
targets = torch.randint(num_classes, size=(100,))
new_targets = smooth_labels(logits=logits,
                            target=targets,
                            smoothing=0.1)
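As noted under target above, a dense target tensor with the same shape as logits is also accepted. A sketch of that usage, assuming one-hot labels built with torch.nn.functional.one_hot:

import torch
import torch.nn.functional as F
from composer.algorithms.label_smoothing import smooth_labels

num_classes = 10
logits = torch.randn(100, num_classes)
# Dense targets with the same shape as logits (here: one-hot labels).
dense_targets = F.one_hot(torch.randint(num_classes, size=(100,)), num_classes).float()
new_targets = smooth_labels(logits=logits,
                            target=dense_targets,
                            smoothing=0.1)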