composer.algorithms.factorize.factorize_core#
Functions
factorize_conv2d – Approximates a KxK convolution by factorizing it into a KxK convolution with fewer channels followed by a 1x1 convolution.
factorize_matrix – Approximates a matrix by factorizing it into a product of two smaller matrices.
Classes
LowRankSolution – Bundles tensors used by a factorized linear operator.
- class composer.algorithms.factorize.factorize_core.LowRankSolution(Wa=None, Wb=None, bias=None, rank=-1, nmse=0)[source]#
Bundles tensors used by a factorized linear operator.
The factorization always splits the operator into two smaller linear operators. The first takes in input of the original shape and embeds it in a lower-dimensional space. The second maps this lower-dimensional space to the original output space.
- Parameters
Wa – First linear operation in the factorized approximation. For a factorized linear operation, Wa is a matrix. For a factorized convolution, Wa matches the shape of the convolution's original weight parameter, except along the channel axis.
Wb – Second linear operation in the factorized approximation. Shape is such that composing Wb with Wa yields an output of the same size as the original operation.
bias – Vector added to the output of the second linear operation.
rank – Output dimensionality (channels or features) of the first linear operation, and input dimensionality of the second linear operation.
nmse – Normalized mean squared error obtained during the optimization procedure used to derive Wa, Wb, and bias. This is equal to the raw mean squared error between the factorized approximation's output and the original output, divided by the variance of the original output. A value of 0 means no error was introduced, and a value of 1 corresponds to capturing the output no better than chance.
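The two operators compose as a single linear map. The sketch below is a minimal, hypothetical helper (apply_linear_solution is not part of this module) showing how a LowRankSolution returned by factorize_matrix(), documented later on this page, could be applied to new inputs: Wa embeds [N, D] inputs into a rank-dimensional space, Wb maps them back to the original [N, M] output space, and bias is added last.

    import torch

    def apply_linear_solution(X: torch.Tensor, solution) -> torch.Tensor:
        # Illustrative only: apply a LowRankSolution produced by factorize_matrix.
        latent = X @ solution.Wa                     # embed: [N, D] -> [N, rank]
        return latent @ solution.Wb + solution.bias  # map back: [N, rank] -> [N, M]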
- composer.algorithms.factorize.factorize_core.factorize_conv2d(X, Wa, Wb=None, rank=0.25, biasA=None, biasB=None, n_iters=3, **conv2d_kwargs)[source]#
Approximates a KxK convolution by factorizing it into a KxK convolution with fewer channels followed by a 1x1 convolution.
Given a convolutional weight tensor W for a 2d convolution of shape [out_channels, in_channels, k_h, k_w] and a vector bias of length out_channels, returns a triple (Wa, Wb, new_bias) of tensors with shapes [rank, in_channels, k_h, k_w], [out_channels, rank, 1, 1], and [out_channels], respectively. Wa, Wb, and new_bias are chosen so as to minimize
\(\|(W * X + bias) - (Wb * (Wa * X) + new\_bias)\|_F\),
where \(*\) denotes convolution, bias broadcasts along all non-channel dimensions, and \(\|\cdot\|_F\) denotes the sum of squared elements.
Similar to factorize_matrix(), this function allows passing in an already-factorized weight tensor in order to enable progressive factorization. In this case, the single tensor W is replaced with a similar (Wa, Wb) pair as the output, though not necessarily with the same rank.
- Parameters
X – A tensor of shape [N, in_channels, H, W], for some N, H, and W.
Wa – The first weight tensor to convolve with X. If Wb is not provided, must be of shape [out_channels, in_channels, k_h, k_w]. Otherwise, must be of shape [original_rank, in_channels, k_h, k_w] for some original_rank < min(in_channels, out_channels).
Wb – The second weight tensor to convolve with the input. If provided, must be of shape [out_channels, original_rank, 1, 1].
rank – Number of channels in the latent representation of X.
biasA – Optional vector of biases. If Wb is None, must have length out_channels. Otherwise, must have length original_rank.
biasB – If provided, must have length out_channels.
n_iters – Number of iterations used in the optimization process. Higher numbers yield lower mean squared error, though there are usually diminishing returns after a handful of iterations.
**conv2d_kwargs – Arguments such as padding, stride, dilation, groups, etc. used in the original convolution. If these are not provided, the factorized tensors might not preserve the function computed by the original weight tensor as well. Note that not all combinations of arguments are supported.
- Returns
solution – A LowRankSolution of rank rank that approximates the original convolution operation.
- Raises
RuntimeError – If biasB is provided but Wb is not.
NotImplementedError – If conv2d_kwargs['dilation'] != 1 or conv2d_kwargs['groups'] != 1.
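A minimal usage sketch follows. The tensor sizes, the integer rank, and the padding keyword are illustrative assumptions; only the factorize_conv2d() call and the LowRankSolution fields come from the documentation above.

    import torch
    from composer.algorithms.factorize.factorize_core import factorize_conv2d

    torch.manual_seed(0)
    X = torch.randn(8, 32, 16, 16)      # [N, in_channels, H, W]
    weight = torch.randn(64, 32, 3, 3)  # original kernel: [out_channels, in_channels, k_h, k_w]
    bias = torch.randn(64)              # passed as biasA since Wb is None; length out_channels

    # Factorize into a 3x3 conv with 16 output channels followed by a 1x1 conv.
    solution = factorize_conv2d(X, weight, rank=16, biasA=bias, padding=1)

    print(solution.Wa.shape)  # expected: [16, 32, 3, 3]
    print(solution.Wb.shape)  # expected: [64, 16, 1, 1]
    print(solution.nmse)      # error relative to the variance of the original output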
- composer.algorithms.factorize.factorize_core.factorize_matrix(X, Y, Wa, Wb=None, bias=None, rank=0.25, n_iters=3)[source]#
Approximates a matrix by factorizing it into a product of two smaller matrices.
Given a matrix W of shape [D, M], a bias vector of length M, and a target rank rank < D, returns a solution (Wa, Wb, new_bias) of tensors with shapes [D, rank], [rank, M], and [M], respectively. These tensors are chosen so as to minimize
\(\|Y - (X @ Wa @ Wb + new\_bias)\|_F\),
where Y = X @ W + bias, @ denotes matrix multiplication, new_bias broadcasts along the row dimension, and \(\|\cdot\|_F\) denotes the sum of squared elements. In the case that rows of X correspond to samples from some distribution, this amounts to minimizing the expected mean squared error in the output.
The input matrix can either be a single matrix W or a pair of matrices (Wa, Wb). The latter case corresponds to using a matrix W = Wa @ Wb that has already been factorized, and is supported in order to facilitate progressively decreasing the rank of the matrix.
- Parameters
X – Input used to evaluate the quality of the approximation. Shape is [N, D], where N is often the number of input samples and D is the dimensionality of each sample.
Y – Output of applying the original matrix to X. Must have shape [N, M] for some M.
Wa – Either the matrix to be factorized, or the first of the two smaller matrices in the already-factorized representation of this matrix. Must be of shape [D, M] in the former case and shape [D, d] in the latter, for some d < D.
Wb – If present, Wa is interpreted as the first of two smaller matrices, and Wb is taken to be the second. Must be of shape [d, M].
bias – A vector added to the output after performing the matrix product with X.
rank – Number of columns in the latent representation of X.
n_iters – Number of iterations used in the optimization process. Higher numbers yield lower mean squared error, though there are usually diminishing returns after a handful of iterations.
- Returns
solution – A LowRankSolution of rank rank that approximates the original matrix.
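As with the convolutional variant, a short usage sketch may help. The dimensions and the integer rank are illustrative assumptions; only the call signature and the LowRankSolution fields come from the documentation above.

    import torch
    from composer.algorithms.factorize.factorize_core import factorize_matrix

    torch.manual_seed(0)
    N, D, M = 256, 64, 32
    X = torch.randn(N, D)
    W = torch.randn(D, M)   # original weight matrix
    bias = torch.randn(M)
    Y = X @ W + bias        # original output, shape [N, M]

    # Factorize W into Wa @ Wb with a 16-dimensional latent representation.
    solution = factorize_matrix(X, Y, W, bias=bias, rank=16)

    Y_hat = X @ solution.Wa @ solution.Wb + solution.bias
    print(solution.nmse)    # 0 would mean Y is reproduced exactly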