# ConstantWithWarmupScheduler#

class composer.optim.ConstantWithWarmupScheduler(t_warmup, alpha=1.0, t_max='1dur', scale_warmup=False)[source]#

Maintains a fixed learning rate, with an initial warmup.

This scheduler is based on ConstantLR from PyTorch, with an added warmup.

Starts with a linear warmup over t_warmup time, then maintains a constant learning rate multiplier of alpha until t_max, after which the multiplier returns to 1.0. Both the multiplier and the duration of this scheduler can be configured.

Specifically, the learning rate multiplier $\alpha$ can be expressed as:

$$\alpha(t) = \begin{cases} t / t_{warmup}, & \text{if } t < t_{warmup} \\ \alpha, & \text{if } t_{warmup} \leq t < t_{max} \\ 1.0, & \text{otherwise} \end{cases}$$

Where $\alpha$ represents the learning rate multiplier to maintain while this scheduler is active, and $t_{max}$ represents the duration of this scheduler.
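As an illustrative sketch (not the library's implementation, which operates on composer `Time` objects rather than raw floats), the piecewise multiplier above can be written as a plain Python function:

```python
def constant_with_warmup_alpha(t: float, t_warmup: float,
                               alpha: float = 1.0, t_max: float = 1.0) -> float:
    """Learning-rate multiplier for a constant schedule with linear warmup.

    A plain-float sketch of the piecewise formula above; times are
    expressed as fractions of the total training duration.
    """
    if t < t_warmup:
        # Linear warmup from 0 up to 1 over the warmup window.
        return t / t_warmup
    if t < t_max:
        # Hold the configured multiplier until the scheduler's duration ends.
        return alpha
    # Past t_max, the multiplier reverts to 1.0.
    return 1.0

# With a 10% warmup and alpha=0.25:
print(constant_with_warmup_alpha(0.05, 0.1, alpha=0.25))  # mid-warmup -> 0.5
print(constant_with_warmup_alpha(0.5, 0.1, alpha=0.25))   # constant phase -> 0.25
print(constant_with_warmup_alpha(1.5, 0.1, alpha=0.25))   # past t_max -> 1.0
```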

Warning

By default, the initial warmup time is not scaled by any provided scale schedule ratio (SSR). To scale the warmup as well, set scale_warmup=True.

Parameters
• t_warmup (str | Time) – Warmup time.

• alpha (float) – Learning rate multiplier to maintain while this scheduler is active. Default = 1.0.

• t_max (str | Time) – Duration of this scheduler. Default = "1dur".

• scale_warmup (bool) – If True, the scale schedule ratio (SSR) also scales the warmup period. Default = False.
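As a small arithmetic sketch of the scale_warmup behavior: under an SSR of 0.5, the scheduler's milestones are halved, but the warmup window stays fixed unless scale_warmup=True. The helper below is hypothetical (Composer applies SSR internally); it only illustrates which durations get scaled:

```python
def effective_times(t_warmup: float, t_max: float, ssr: float,
                    scale_warmup: bool = False) -> tuple:
    """Hypothetical sketch of how SSR rescales schedule milestones.

    Returns the effective (warmup, max) durations as fractions of
    the original training duration.
    """
    # The warmup is only rescaled when scale_warmup is set.
    scaled_warmup = t_warmup * ssr if scale_warmup else t_warmup
    # The scheduler's overall duration is always rescaled by SSR.
    return scaled_warmup, t_max * ssr

# With SSR = 0.5, warmup stays fixed by default...
print(effective_times(0.1, 1.0, ssr=0.5))                     # (0.1, 0.5)
# ...but is halved when scale_warmup=True.
print(effective_times(0.1, 1.0, ssr=0.5, scale_warmup=True))  # (0.05, 0.5)
```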