composer.functional.apply_squeeze_excite(model, latent_channels=64, min_channels=128, optimizers=None)[source]

Adds Squeeze-and-Excitation blocks (Hu et al., 2019) after torch.nn.Conv2d layers.

A Squeeze-and-Excitation block applies global average pooling to the input, feeds the resulting vector to a single-hidden-layer fully-connected network (MLP), and uses the outputs of this MLP as attention coefficients to rescale the input. This allows the network to take into account global information about each input, as opposed to only local receptive fields like in a convolutional layer.
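
A minimal sketch of that computation in PyTorch follows; the class name, activation choices, and shapes are illustrative assumptions, not Composer's internal implementation:

import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    # Illustrative SE block: squeeze (global average pool) -> MLP -> rescale.
    def __init__(self, num_channels: int, latent_channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # "squeeze": one value per channel
        self.fc = nn.Sequential(             # "excitation": single-hidden-layer MLP
            nn.Linear(num_channels, latent_channels),
            nn.ReLU(inplace=True),
            nn.Linear(latent_channels, num_channels),
            nn.Sigmoid(),                    # attention coefficients in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(n, c))  # (N, C) per-channel weights
        return x * weights.view(n, c, 1, 1)         # rescale the input channel-wise

out = SqueezeExcite(num_channels=256, latent_channels=64)(torch.randn(8, 256, 14, 14))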

  • model (Module) – The module to apply Squeeze-and-Excitation replacement to.

  • latent_channels (float, optional) – Dimensionality of the hidden layer within the added MLP. If less than 1, interpreted as a fraction of the number of output channels in the torch.nn.Conv2d immediately preceding each Squeeze-and-Excitation block. Default: 64.

  • min_channels (int, optional) – An SE block is added after a torch.nn.Conv2d module conv only if min(conv.in_channels, conv.out_channels) >= min_channels, i.e. only when both the layer's input and output channel counts reach this threshold. Default: 128.

  • optimizers (Optimizer | Sequence[Optimizer], optional) –

    Existing optimizer(s) bound to model.parameters(). All optimizers that were already constructed with model.parameters() must be specified here so that they will optimize the correct parameters after the replacement.

    If the optimizer(s) are constructed after calling this function, it is safe to omit this parameter; those optimizers will already see the correct model parameters. Both orderings are shown in the sketch after this list.
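
A short sketch of the two orderings described above; the optimizer choice and learning rate are illustrative only:

import torch
import composer.functional as cf
from torchvision import models

# Ordering 1: the optimizer already exists, so it must be passed in so that
# the replacement can re-point it at the correct post-surgery parameters.
model = models.resnet50()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
cf.apply_squeeze_excite(model, optimizers=opt)

# Ordering 2: apply the replacement first, then construct the optimizer; the
# optimizers argument can be omitted because the optimizer is built from the
# already-modified parameters.
model = models.resnet50()
cf.apply_squeeze_excite(model)
opt = torch.optim.SGD(model.parameters(), lr=0.1)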


Returns: The modified model


Example:

import composer.functional as cf
from torchvision import models

model = models.resnet50()
cf.apply_squeeze_excite(model)
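
The keyword arguments can also be tuned; for instance, a fractional latent_channels sizes each hidden layer relative to the preceding conv's output channels. The specific values below are arbitrary:

model = models.resnet50()
# Hidden layer = half of each preceding conv's output channels; only convs
# whose input and output widths are both at least 64 get an SE block.
cf.apply_squeeze_excite(model, latent_channels=0.5, min_channels=64)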