apply_factorization

composer.functional.apply_factorization(model, factorize_convs=True, factorize_linears=True, min_channels=512, latent_channels=0.25, min_features=512, latent_features=0.25, optimizers=None)

Replaces torch.nn.Linear and torch.nn.Conv2d modules with FactorizedLinear and FactorizedConv2d modules.

Factorized modules replace one full-rank operation with a sequence of two lower-rank operations. When the rank is low enough, this can save computation, at the cost of expressive power. See Factorize for details.
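As a rough sketch of the idea (illustrative only, not the FactorizedLinear implementation itself), a single full-rank torch.nn.Linear can be approximated by two smaller linear layers that pass activations through a narrower latent space; the dimensions and parameter counts below are examples:

import torch

d_in, d_out, latent = 1024, 1024, 256

# Full-rank operation: one d_in x d_out weight matrix.
full = torch.nn.Linear(d_in, d_out)

# Low-rank approximation: two smaller matrices through a latent space.
factored = torch.nn.Sequential(
    torch.nn.Linear(d_in, latent, bias=False),
    torch.nn.Linear(latent, d_out),
)

print(sum(p.numel() for p in full.parameters()))      # 1,049,600
print(sum(p.numel() for p in factored.parameters()))  # 525,312

With latent = d_in / 4, the factorized version uses roughly half the parameters (and multiply-adds), at the cost of restricting the operation to rank 256.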

Parameters
  • model (Module) – the model to modify in-place.

  • factorize_convs (bool, optional) – whether to try factorizing torch.nn.Conv2d modules. Default: True.

  • factorize_linears (bool, optional) – whether to try factorizing torch.nn.Linear modules. Default: True.

  • min_channels (int, optional) – if a torch.nn.Conv2d module does not have at least this many input and output channels, it will be ignored. Modules with few channels are unlikely to be accelerated by factorization due to poor hardware utilization. Default: 512.

  • latent_channels (int | float, optional) – number of latent channels to use in factorized convolutions. Can be specified as either an integer > 1 or as a float within [0, 1). In the latter case, the value is interpreted as a fraction of min(in_channels, out_channels) for each torch.nn.Conv2d module and is converted to the equivalent integer value, with a minimum of 1 (see the conversion sketch after this list). Default: 0.25.

  • min_features (int, optional) – if a torch.nn.Linear module does not have at least this many input and output features, it will be ignored. Modules with few features are unlikely to be accelerated by factorization due to poor hardware utilization. Default: 512.

  • latent_features (int | float, optional) – size of the latent space for factorized linear modules. Can be specified as either an integer > 1 or as a float within [0, 0.5). In the latter case, the value is interpreted as a fraction of min(in_features, out_features) for each torch.nn.Linear module and is converted to the equivalent integer value, with a minimum of 1. Default: 0.25.

  • optimizers (Optimizer | Sequence[Optimizer], optional) –

    Existing optimizers bound to model.parameters(). All optimizers that have already been constructed with model.parameters() must be specified here so that they will optimize the correct parameters.

    If the optimizer(s) are constructed after calling this function, it is safe to omit this parameter; those optimizers will see the correct model parameters. Both orderings are shown in the extended example below.
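As a hedged sketch of how the fractional latent_channels / latent_features settings resolve to integers (the helper and its exact rounding rule are assumptions; the documentation above only guarantees an integer value with a minimum of 1):

def approx_latent_size(value, in_size, out_size):
    # Hypothetical helper mirroring the documented behavior: values >= 1 are
    # treated as absolute sizes; fractions are taken relative to
    # min(in_size, out_size), with a floor of 1.
    if value >= 1:
        return int(value)
    return max(1, int(value * min(in_size, out_size)))

# e.g. a Conv2d with 512 input and 1024 output channels under the default
# latent_channels=0.25 would use about 128 latent channels.
print(approx_latent_size(0.25, 512, 1024))  # 128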

Example

import composer.functional as cf
from torchvision import models
model = models.resnet50()
cf.apply_factorization(model)
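The ordering constraint on optimizers can be handled in either of two ways; the optimizer choice and hyperparameters below are illustrative:

import torch
import composer.functional as cf
from torchvision import models

# Option 1: factorize first, then construct the optimizer, so it tracks the
# new factorized parameters automatically.
model = models.resnet50()
cf.apply_factorization(model, latent_channels=0.25, latent_features=0.25)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Option 2: if an optimizer was already constructed from model.parameters(),
# pass it so its parameter groups are updated to the factorized modules.
model2 = models.resnet50()
opt2 = torch.optim.SGD(model2.parameters(), lr=0.1)
cf.apply_factorization(model2, optimizers=opt2)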