class composer.algorithms.SqueezeExcite2d(num_features, latent_channels=0.125)[source]#

Squeeze-and-Excitation block from Hu et al., 2019.

This block applies global average pooling to the input, feeds the resulting vector to a single-hidden-layer fully-connected network (MLP), and uses the outputs of this MLP as attention coefficients to rescale the input. This allows the network to take into account global information about each input, as opposed to only local receptive fields like in a convolutional layer.

  • num_features (int) – Number of features or channels in the input.

  • latent_channels (float, optional) – Dimensionality of the hidden layer within the added MLP. If less than 1, interpreted as a fraction of num_features. Default: 0.125.
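The computation described above (global average pool, single-hidden-layer MLP, channel-wise rescaling) can be sketched in plain NumPy. This is an illustrative sketch, not Composer's implementation: the weight arrays `w1`/`w2`, the ReLU/sigmoid choice, and the `max(1, int(latent_channels * num_features))` rounding are assumptions for demonstration.

```python
import numpy as np

def squeeze_excite_2d(x, w1, w2):
    """Sketch of squeeze-and-excitation on an (N, C, H, W) input.

    w1: (C, C_latent) hidden-layer weights (hypothetical, for illustration)
    w2: (C_latent, C) output-layer weights (hypothetical, for illustration)
    """
    # Squeeze: global average pool over spatial dims -> (N, C)
    pooled = x.mean(axis=(2, 3))
    # Single-hidden-layer MLP: ReLU hidden layer, sigmoid output gate
    hidden = np.maximum(pooled @ w1, 0.0)
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # attention coefficients in (0, 1)
    # Excite: rescale every channel of the input by its coefficient
    return x * scale[:, :, None, None]

# latent_channels=0.125 with 16 channels gives a 2-unit hidden layer
# (assuming fractional values are rounded as max(1, int(0.125 * C)))
num_features = 16
latent = max(1, int(0.125 * num_features))

rng = np.random.default_rng(0)
x = rng.standard_normal((2, num_features, 8, 8))
w1 = rng.standard_normal((num_features, latent)) * 0.1
w2 = rng.standard_normal((latent, num_features)) * 0.1

y = squeeze_excite_2d(x, w1, w2)
```

Because the sigmoid gate lies strictly in (0, 1), the output has the same shape as the input and every channel is attenuated by a per-example, per-channel factor derived from global (not just local) statistics.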