- class composer.algorithms.SeqLengthWarmup(duration=0.3, min_seq_length=8, max_seq_length=1024, step_size=8, truncate=True, preserve_end_of_sequence=False)
Progressively increases the sequence length during training.
Changes the sequence length of all tensors in the input batch. The sequence length increases from `min_seq_length` to `max_seq_length` in steps of `step_size` during the first `duration` fraction of training. The sequence length is then kept at `max_seq_length` for the rest of training.
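The schedule above can be sketched as a small helper. This is a minimal illustration, not Composer's implementation: the function name and the exact interpolation/rounding are assumptions, though the rounding to a multiple of `step_size` mirrors the throughput note below.

```python
def seq_length_at(frac_of_training, duration=0.3, min_seq_length=8,
                  max_seq_length=1024, step_size=8):
    """Hypothetical sketch: sequence length used at a given fraction of training."""
    if frac_of_training >= duration:
        # Warmup is over; stay at the maximum for the rest of training.
        return max_seq_length
    # Interpolate from min_seq_length toward max_seq_length over the warmup window.
    span = max_seq_length - min_seq_length
    length = min_seq_length + int(span * (frac_of_training / duration))
    # Round down to a multiple of step_size, staying within [min, max].
    length -= length % step_size
    return max(min_seq_length, min(length, max_seq_length))
```

With the defaults, this yields `min_seq_length` at the start of training and `max_seq_length` from the `duration` mark onward, with every intermediate value a multiple of `step_size`.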
Tensors are either truncated (`truncate=True`) or reshaped to create new examples from the extra tokens (`truncate=False`).
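As a pure-Python sketch of the two strategies on a 2-D batch of token IDs (the helper name and the exact remainder handling in the reshape case are assumptions, not Composer's code):

```python
def shorten_batch(input_ids, curr_seq_length, truncate=True):
    """Hypothetical sketch: shrink a [batch, seq] list of token IDs.

    truncate=True drops the extra tokens; truncate=False reshapes them
    into additional examples instead of discarding them.
    """
    if truncate:
        # Keep only the first curr_seq_length tokens of each example.
        return [seq[:curr_seq_length] for seq in input_ids]
    # Reshape: flatten the batch, then re-split it into curr_seq_length
    # chunks, dropping any trailing remainder that cannot fill an example.
    flat = [tok for seq in input_ids for tok in seq]
    n_full = len(flat) // curr_seq_length
    return [flat[i * curr_seq_length:(i + 1) * curr_seq_length]
            for i in range(n_full)]
```

Note that reshaping increases the effective batch size while truncation keeps it fixed.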
This algorithm runs on `Event.AFTER_DATALOADER` to modify the sequence length of a batch of data after the model and data have been moved to accelerators.
`step_size` should be a multiple of eight for optimal throughput on NVIDIA GPUs.
Variable input lengths can create CUDA OOM errors. To avoid this, we follow the PyTorch notes and pre-allocate the memory with a blank forward and backward pass.
See the Method Card for more details.
```python
from composer.algorithms import SeqLengthWarmup
from composer import Trainer

seq_length_warmup = SeqLengthWarmup(duration=0.5,
                                    min_seq_length=8,
                                    max_seq_length=1024,
                                    step_size=8,
                                    truncate=True,
                                    preserve_end_of_sequence=False)

trainer = Trainer(model=model,
                  train_dataloader=train_dataloader,
                  max_duration="1ep",
                  algorithms=[seq_length_warmup])
```
duration (float, optional) – Fraction of total training for sequence length warmup. Default = 0.3.
min_seq_length (int, optional) – Minimum sequence length to start the warmup. Default = 8.
max_seq_length (int, optional) – Maximum sequence length to stop the warmup. Default = 1024.
step_size (int, optional) – Step size of the sequence length. Default = 8.
truncate (bool, optional) – Truncate sequences early, or reshape tensors to create new examples out of the extra tokens. Default: True.
preserve_end_of_sequence (bool, optional) – Preserve the end-of-sequence of the batch when truncating. Useful when input formats include a unique end-of-sequence token. Ignored if `truncate=False`. Default: False. E.g., if a batch's `"input_ids"` is `[[10, 11, 12, 13, 14, 15]]` and the current sequence length is 3, `"input_ids"` in the returned batch would be `[[10, 11, 12]]` with `preserve_end_of_sequence=False` and would be `[[10, 11, 15]]` with `preserve_end_of_sequence=True`. This behavior applies to any batch tensor with 2 or more dimensions.
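The `preserve_end_of_sequence` behavior above can be sketched in a few lines of pure Python (the helper name is hypothetical; this is an illustration of the documented semantics, not Composer's implementation):

```python
def truncate_preserving_eos(input_ids, curr_seq_length, preserve_end_of_sequence):
    """Hypothetical sketch: truncate each example to curr_seq_length tokens,
    optionally keeping the final (end-of-sequence) token in the last slot."""
    out = []
    for seq in input_ids:
        if preserve_end_of_sequence and len(seq) > curr_seq_length:
            # Keep the first curr_seq_length - 1 tokens plus the final token.
            out.append(seq[:curr_seq_length - 1] + [seq[-1]])
        else:
            # Plain truncation: keep only the first curr_seq_length tokens.
            out.append(seq[:curr_seq_length])
    return out
```

Applied to the example above, `[[10, 11, 12, 13, 14, 15]]` with length 3 yields `[[10, 11, 12]]` without preservation and `[[10, 11, 15]]` with it.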