composer.functional.apply_squeeze_excite(model, latent_channels=64, min_channels=128, optimizers=None)
A Squeeze-and-Excitation block applies global average pooling to the input, feeds the resulting vector to a single-hidden-layer fully-connected network (MLP), and uses the outputs of this MLP as attention coefficients to rescale the input. This allows the network to take into account global information about each input, as opposed to only local receptive fields like in a convolutional layer.
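The description above can be sketched as a minimal PyTorch module. This is an illustrative sketch, not Composer's internal implementation; the class and parameter names (`SqueezeExcite`, `num_channels`, `latent_channels`) are assumptions for the example:

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Minimal sketch of a Squeeze-and-Excitation block (illustrative only;
    Composer's actual implementation may differ in details)."""

    def __init__(self, num_channels: int, latent_channels: int):
        super().__init__()
        # Squeeze: global average pool collapses each channel to one value.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: single-hidden-layer MLP producing per-channel weights.
        self.fc = nn.Sequential(
            nn.Linear(num_channels, latent_channels, bias=False),
            nn.ReLU(),
            nn.Linear(latent_channels, num_channels, bias=False),
            nn.Sigmoid(),  # attention coefficients in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        w = self.pool(x).view(n, c)        # (N, C) global channel descriptor
        w = self.fc(w).view(n, c, 1, 1)    # per-channel attention weights
        return x * w                       # rescale the input channel-wise
```

Because the attention weights are computed from a global pool, every spatial position is rescaled using information from the whole feature map, not just a local receptive field.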
model (Module) – The module to apply the Squeeze-and-Excitation replacement to.
latent_channels (float, optional) – Dimensionality of the hidden layer within the added MLP. If less than 1, interpreted as a fraction of the number of output channels in the torch.nn.Conv2d immediately preceding each Squeeze-and-Excitation block. Default: 64.
min_channels (int, optional) – A Squeeze-and-Excitation block is added after a torch.nn.Conv2d module conv only if min(conv.in_channels, conv.out_channels) >= min_channels. Default: 128.
optimizers (Optimizer | Sequence[Optimizer], optional) – Existing optimizer(s) bound to model.parameters(). All optimizers that have already been constructed with model.parameters() must be specified here so that they will optimize the correct parameters. If the optimizer(s) are constructed after calling this function, it is safe to omit this parameter; those optimizers will see the correct model parameters. Default: None.
Returns: The modified model.
import composer.functional as cf
from torchvision import models

model = models.resnet50()
cf.apply_squeeze_excite(model)
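The optimizer-ordering caveat above can be demonstrated with plain PyTorch, without Composer. The sketch below mimics what a surgery function does: it adds a new submodule to a model after an optimizer was already constructed, leaving the optimizer unaware of the new parameters (the model and submodule here are arbitrary examples, not Composer internals):

```python
import torch
import torch.nn as nn

# Build a small model and construct an optimizer from its parameters.
model = nn.Sequential(nn.Conv2d(3, 8, 3))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Now mutate the model, as apply_squeeze_excite would, AFTER the optimizer
# was built. The new parameters are never registered with the optimizer.
model.add_module("extra", nn.Conv2d(8, 8, 3))

n_opt = sum(p.numel() for g in opt.param_groups for p in g["params"])
n_model = sum(p.numel() for p in model.parameters())
# n_opt < n_model: the optimizer's param groups are missing the new weights,
# which is why existing optimizers must be passed via the optimizers argument.
```

Constructing the optimizer after calling the surgery function avoids the problem entirely, which is why omitting optimizers is safe in that order.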