# composer.algorithms.layer_freezing.layer_freezing#

Core Layer Freezing classes and functions.

Functions

 freeze_layers Progressively freeze the layers of the network in-place during training, starting with the earlier layers.

Classes

 LayerFreezing Progressively freeze the layers of the network during training, starting with the earlier layers.
class composer.algorithms.layer_freezing.layer_freezing.LayerFreezing(freeze_start=0.5, freeze_level=1.0)[source]#

Progressively freeze the layers of the network during training, starting with the earlier layers.

Freezing starts after the fraction of training specified by freeze_start has elapsed. The fraction of layers frozen increases linearly until it reaches freeze_level at the end of training.

This freezing schedule is most similar to FreezeOut and Freeze Training.
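The linear schedule described above can be sketched as a small helper. This is an illustrative sketch of the schedule's arithmetic, not the library's actual implementation; the function name `freeze_fraction` is hypothetical.

```python
def freeze_fraction(current_duration, freeze_start=0.5, freeze_level=1.0):
    """Illustrative sketch: fraction of layers to freeze at a point in training.

    current_duration is the fraction of training completed, in [0, 1].
    """
    if current_duration <= freeze_start:
        # No freezing before the warmup fraction has elapsed.
        return 0.0
    # Scale linearly from 0 at freeze_start up to freeze_level at the end.
    progress = (current_duration - freeze_start) / (1.0 - freeze_start)
    return min(freeze_level, freeze_level * progress)
```

With the defaults, nothing is frozen for the first half of training, half the layers are frozen at 75% of training, and all layers are frozen at the end.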

Runs on EPOCH_END.

Example

```python
from composer.algorithms import LayerFreezing
from composer.trainer import Trainer

layer_freezing_algorithm = LayerFreezing(
    freeze_start=0.0,
    freeze_level=1.0,
)
trainer = Trainer(
    model=model,
    max_duration="1ep",
    algorithms=[layer_freezing_algorithm],
    optimizers=[optimizer],
)
```

Parameters
• freeze_start (float) – The fraction of training to run before freezing begins. Default: 0.5.

• freeze_level (float) – The maximum fraction of layers to freeze. Default: 1.0.

property find_unused_parameters[source]#

Override in order to tell DDP that some parameters will not have gradients computed for them after layer freezing is applied.
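The reason for this override can be illustrated with a minimal stand-in class. This is a hypothetical sketch, not the library's actual code: once freezing begins, frozen parameters stop receiving gradients, and DDP must be told to expect unused parameters rather than wait for gradients that never arrive.

```python
class LayerFreezingSketch:
    """Hypothetical stand-in showing why the property exists."""

    @property
    def find_unused_parameters(self):
        # Frozen layers produce no gradients, so DistributedDataParallel
        # must be constructed with find_unused_parameters=True to avoid
        # stalling while waiting for gradients from frozen parameters.
        return True
```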

composer.algorithms.layer_freezing.layer_freezing.freeze_layers(model, optimizers, current_duration, freeze_start=0.5, freeze_level=1.0)[source]#

Progressively freeze the layers of the network in-place during training, starting with the earlier layers.

Example

```python
from composer.algorithms.layer_freezing import freeze_layers

freeze_depth, freeze_level = freeze_layers(
    model=model,
    optimizers=optimizer,
    current_duration=0.5,
    freeze_start=0.0,
    freeze_level=1.0,
)
```

Parameters
• model (Module) – The model being trained.

• optimizers (Optimizer | Sequence[Optimizer]) – The optimizers used during training.

• current_duration (float) – The fraction, in [0, 1), of the training process that has been completed.

• freeze_start (float, optional) – The fraction of the training process in [0, 1) to run before freezing begins. Default: 0.5.

• freeze_level (float, optional) – The maximum fraction of layers, in [0, 1], to freeze. Default: 1.0.

Returns

The number of layers frozen, and the percentage of the total model frozen.

Return type

(int, float)