select_using_loss

composer.functional.select_using_loss(input, target, model, loss_fun, keep=0.5, scale_factor=1)

Prunes minibatches as a subroutine of SelectiveBackprop. Computes the loss function on the provided training examples and prunes the minibatch according to each example's difficulty. The fraction of the minibatch kept for gradient computation is specified by the argument 0 <= keep <= 1.

To speed up SB's selection forward pass, the argument scale_factor can be used to spatially downsample input tensors. The full-sized inputs will still be used for the weight gradient computation.
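For intuition, scale_factor=0.5 halves each spatial dimension of the inputs before the selection forward pass. A minimal sketch of this downsampling step (an illustration using torch.nn.functional.interpolate, not necessarily the library's exact implementation; the tensor shape is hypothetical):

>>> import torch
>>> import torch.nn.functional as F
>>> X = torch.randn(64, 3, 32, 32)  # hypothetical NCHW image batch
>>> # Halve the spatial dimensions, as scale_factor=0.5 would
>>> X_small = F.interpolate(X, scale_factor=0.5, mode="bilinear", align_corners=False)
>>> X_small.shape
torch.Size([64, 3, 16, 16])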

Parameters
  • input (Tensor) – Input tensor to prune.

  • target (Tensor) – Target tensor to prune.

  • model (Callable) – Model with which to predict outputs.

  • loss_fun (Callable) – Loss function of the form loss(outputs, targets, reduction='none'). The function must take the keyword argument reduction='none' to ensure that per-sample losses are returned (see the sketch after this list).

  • keep (float, optional) – Fraction of examples in the batch to keep. Default: 0.5.

  • scale_factor (float, optional) – Multiplier between 0 and 1 for spatial size. Downsampling requires the input tensor to be at least 3D. Default: 1.
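As noted for loss_fun above, any loss that accepts reduction='none' and returns one loss value per example is compatible. A minimal sketch (this wrapper is hypothetical, shown only to illustrate the required signature):

>>> import torch.nn.functional as F
>>> def loss_fun(outputs, targets, reduction="none"):
...     # With reduction='none', one loss value is returned per example,
...     # which lets SB rank examples by difficulty
...     return F.cross_entropy(outputs, targets, reduction=reduction)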

Returns

(torch.Tensor, torch.Tensor) – The pruned batch of inputs and targets.

Raises
  • ValueError – If scale_factor > 1.

  • TypeError – If loss_fun has the wrong signature or is not callable.

Note

This function runs an extra forward pass through the model on the batch of data. If you are using a non-default precision, ensure that this forward pass runs in your desired precision. For example (X_sb, y_sb, lin_model, and loss_fun are assumed to be defined beforehand):

>>> import torch
>>> from composer.algorithms.selective_backprop import select_using_loss
>>> # Run SB's extra forward pass under AMP to match the training precision
>>> with torch.cuda.amp.autocast(True):
...     X_new, y_new = select_using_loss(
...         X_sb,
...         y_sb,
...         lin_model,
...         loss_fun,
...         keep=0.5,
...         scale_factor=1
...     )