Precision#
- class composer.core.Precision(value)[source]#
Enum class for the numerical precision to be used by the model.
- FP32#
Use 32-bit floating-point precision. Compatible with CPUs and GPUs.
- AMP_FP16#
Use torch.cuda.amp with 16-bit floating-point precision. Only compatible with GPUs.
- AMP_BF16#
Use torch.cuda.amp with 16-bit bfloat16 precision.
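As a rough sketch of how such a precision enum can be defined and used, the following uses Python's standard `enum` module. The base class and the lowercase string values shown here are assumptions for illustration, not taken from this page:

```python
from enum import Enum

class Precision(Enum):
    """Numerical precision options for training (string values are assumed)."""
    FP32 = "fp32"          # full 32-bit floats; works on CPUs and GPUs
    AMP_FP16 = "amp_fp16"  # torch.cuda.amp with float16; GPUs only
    AMP_BF16 = "amp_bf16"  # torch.cuda.amp with bfloat16

# Hypothetical usage: mapping a config string to an enum member.
precision = Precision("amp_fp16")
print(precision.name)  # AMP_FP16
```

Representing each member with a string value makes it easy to round-trip precision settings through YAML or CLI configuration files.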