Factorize
- `class composer.algorithms.Factorize(factorize_convs=True, factorize_linears=True, min_channels=256, latent_channels=0.25, min_features=256, latent_features=128)`
Decomposes linear operators into pairs of smaller linear operators.
Specifically, this algorithm replaces `torch.nn.Conv2d` and `torch.nn.Linear` modules with `FactorizedConv2d` and `FactorizedLinear` modules. The replacement is only performed if doing so would reduce the number of multiply-adds used to compute each module's output. For linear layers and pointwise convolutions, this means that the factorization must use an intermediate rank of less than half the input and output ranks, since it must perform two operations instead of one.
For convolutions with kernel sizes greater than 1, the threshold for factorization being worthwhile varies with kernel size. Larger kernels allow larger intermediate ranks.
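The multiply-add criterion for linear layers can be sketched with a small helper. These functions are illustrative only (they are not part of the composer API) and assume the stated cost model: a dense layer of shape `(in, out)` costs `in * out` multiply-adds per input vector, and a factorized pair costs `in * latent + latent * out`.

```python
# Hypothetical helpers illustrating the multiply-add criterion described
# above; these are NOT part of the composer API.

def linear_macs(in_features: int, out_features: int) -> int:
    """Multiply-adds for one dense layer applied to a single input vector."""
    return in_features * out_features

def factorized_linear_macs(in_features: int, out_features: int,
                           latent_features: int) -> int:
    """Multiply-adds after splitting the layer into two smaller dense layers."""
    return in_features * latent_features + latent_features * out_features

def factorization_helps(in_features: int, out_features: int,
                        latent_features: int) -> bool:
    """True only if factorizing strictly reduces the multiply-add count."""
    return (factorized_linear_macs(in_features, out_features, latent_features)
            < linear_macs(in_features, out_features))

# With in == out == 512, a latent size of exactly half (256) only breaks
# even, which is why the latent rank must be strictly less than half.
print(factorization_helps(512, 512, 256))  # False: 512*256 + 256*512 == 512*512
print(factorization_helps(512, 512, 128))  # True:  512*128 + 128*512 <  512*512
```

A latent size of exactly half the input/output rank breaks even only when the two ranks are equal, which is why worthwhile factorizations need an intermediate rank below that threshold.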
See `factorize_matrix()` and `factorize_conv2d()` for more information about the factorization process. See `FactorizedConv2d` and `FactorizedLinear` for more information about the factorized modules used to replace the original modules.

Runs on `Event.INIT`.

- Parameters
  - `factorize_convs` (*bool*) – whether to try factorizing `torch.nn.Conv2d` modules. Default: `True`.
  - `factorize_linears` (*bool*) – whether to try factorizing `torch.nn.Linear` modules. Default: `True`.
  - `min_channels` (*int*) – if a `torch.nn.Conv2d` module does not have at least this many input and output channels, it will be ignored. Modules with few channels are unlikely to be accelerated by factorization due to poor hardware utilization. Default: `256`.
  - `latent_channels` (*int*, *float*) – number of latent channels to use in factorized convolutions. Can be specified as either an integer > 1 or as a float within `[0, 1)`. In the latter case, the value is interpreted as a fraction of `min(in_channels, out_channels)` for each `torch.nn.Conv2d` module and is converted to the equivalent integer value, with a minimum of 1. Default: `0.25`.
  - `min_features` (*int*) – if a `torch.nn.Linear` module does not have at least this many input and output features, it will be ignored. Modules with few features are unlikely to be accelerated by factorization due to poor hardware utilization. Default: `256`.
  - `latent_features` (*int*, *float*) – size of the latent space for factorized linear modules. Can be specified as either an integer > 1 or as a float within `[0, 0.5)`. In the latter case, the value is interpreted as a fraction of `min(in_features, out_features)` for each `torch.nn.Linear` module and is converted to the equivalent integer value, with a minimum of 1. Default: `128`.
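The float-to-integer conversion described for `latent_channels` and `latent_features` can be sketched as follows. This is a hypothetical helper mirroring the documented semantics, not composer's actual implementation:

```python
# Hypothetical helper mirroring the documented latent-size semantics;
# NOT composer's actual implementation.

def resolve_latent_size(latent, in_size: int, out_size: int) -> int:
    """Resolve a latent size given as an int > 1 or a float fraction.

    Floats in [0, 1) are interpreted as a fraction of min(in_size, out_size)
    and converted to an integer, with a minimum of 1.
    """
    if isinstance(latent, float) and 0 <= latent < 1:
        return max(1, int(latent * min(in_size, out_size)))
    return int(latent)

print(resolve_latent_size(0.25, 512, 1024))  # 128 (0.25 * min(512, 1024))
print(resolve_latent_size(128, 512, 1024))   # 128 (integers used directly)
```

Note that with the default `latent_channels=0.25`, a convolution with 512 input and 1024 output channels would be factorized through 128 latent channels.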