# composer.algorithms.label_smoothing.label_smoothing

Core Label Smoothing classes and functions.

Functions

 smooth_labels Shrink targets towards a uniform distribution as in Szegedy et al.

Classes

 LabelSmoothing Shrink targets towards a uniform distribution as in Szegedy et al.
class composer.algorithms.label_smoothing.label_smoothing.LabelSmoothing(smoothing=0.1, target_key=1)[source]

Shrink targets towards a uniform distribution as in Szegedy et al.

The smoothed labels are computed as `(1 - smoothing) * targets + smoothing * unif`, where `unif` is a vector with all elements equal to `1 / num_classes`.

Parameters
• smoothing (float, optional) – Strength of the label smoothing, in $$[0, 1]$$. `smoothing=0` means no label smoothing, and `smoothing=1` means maximal smoothing (the original targets are ignored). Default: 0.1.

• target_key (str | int | Tuple[Callable, Callable] | Any, optional) – A key that indexes the target in the batch. Can also be a pair of get and set functions, where the getter comes first in the pair. The default is 1, which assumes the batch is a sequence (e.g. an (input, target) tuple) whose second element is the target. Default: 1.

Example

from composer import Trainer
from composer.algorithms import LabelSmoothing

# model, optimizer, and the training dataloader are assumed to be defined elsewhere
algorithm = LabelSmoothing(smoothing=0.1)
trainer = Trainer(
    model=model,
    max_duration="1ep",
    algorithms=[algorithm],
    optimizers=[optimizer],
)
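The smoothing formula above can be checked by hand. Below is a minimal pure-Python sketch of the arithmetic for a single one-hot target; the variable names are illustrative and not part of the Composer API:

```python
# Worked example of (1 - smoothing) * targets + smoothing * unif
# for num_classes = 4 and a one-hot target for class 0.
smoothing = 0.1
num_classes = 4
target = [1.0, 0.0, 0.0, 0.0]             # one-hot target
unif = [1.0 / num_classes] * num_classes  # uniform distribution

smoothed = [(1 - smoothing) * t + smoothing * u for t, u in zip(target, unif)]
# smoothed is approximately [0.925, 0.025, 0.025, 0.025];
# the row still sums to 1, so it remains a valid distribution
```

Note that the correct class keeps most of its mass (0.9 + 0.1/4 = 0.925) while every other class receives the same small share.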

composer.algorithms.label_smoothing.label_smoothing.smooth_labels(logits, target, smoothing=0.1)[source]

Shrink targets towards a uniform distribution as in Szegedy et al.

The smoothed labels are computed as `(1 - smoothing) * targets + smoothing * unif`, where `unif` is a vector with all elements equal to `1 / num_classes`.

Parameters
• logits (Tensor) – predicted value for target, or any other tensor with the same shape. Shape must be (N, num_classes, ...) for N examples and num_classes classes with any number of optional extra dimensions.

• target (Tensor) – target tensor of shape N or (N, num_classes, ...). In the former case, elements of target must be integer class ids in the range [0, num_classes). In the latter case, target must have the same shape as logits.

• smoothing (float, optional) – Strength of the label smoothing, in $$[0, 1]$$. `smoothing=0` means no label smoothing, and `smoothing=1` means maximal smoothing (targets are ignored). Default: 0.1.

Returns

torch.Tensor – The smoothed targets.

Example

import torch
from composer.algorithms.label_smoothing import smooth_labels

num_classes = 10
logits = torch.randn(100, num_classes)  # stand-in for model outputs
targets = torch.randint(num_classes, size=(100,))
new_targets = smooth_labels(logits=logits, target=targets, smoothing=0.1)
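For integer class-id targets, the behavior described above amounts to one-hot encoding followed by interpolation with the uniform distribution. A sketch in plain PyTorch that mirrors the docstring's formula (not necessarily Composer's internal implementation):

```python
import torch
import torch.nn.functional as F

num_classes = 10
smoothing = 0.1
targets = torch.randint(num_classes, size=(100,))

# One-hot encode the integer class ids, then interpolate with the
# uniform distribution as the formula describes.
one_hot = F.one_hot(targets, num_classes).float()  # shape (100, 10)
smoothed = (1 - smoothing) * one_hot + smoothing / num_classes

# Each row still sums to 1: 0.9 * 1 + 10 * (0.1 / 10) = 1.0
assert torch.allclose(smoothed.sum(dim=1), torch.ones(100))
```

This also makes the second accepted target shape clear: a (N, num_classes, ...) target is smoothed elementwise with no one-hot step.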