# Precision

class composer.core.Precision(value)

Enum class for the numerical precision to be used by the model.

AMP

Use automatic mixed precision via torch.cuda.amp. Only compatible with GPUs.

FP16

Use 16-bit floating-point precision. Currently only compatible with GPUs when using DeepSpeed.

FP32

Use 32-bit floating-point precision. Compatible with CPUs and GPUs.

BF16

Use 16-bit brain floating-point (bfloat16) mixed precision. Compatible with CPUs and GPUs.
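As a rough illustration of how such an enum is typically used, the sketch below models the precision options as a string-valued `Enum` and normalizes a user-supplied string (e.g. from a config file) to a member. The lowercase string values (`"amp"`, `"fp16"`, `"fp32"`, `"bf16"`) and the `select_precision` helper are assumptions for this sketch, not part of the documented API.

```python
from enum import Enum


class Precision(Enum):
    """Minimal sketch of a numerical-precision enum (string values assumed)."""

    AMP = "amp"    # automatic mixed precision via torch.cuda.amp (GPU only)
    FP16 = "fp16"  # 16-bit floating point (GPU with DeepSpeed)
    FP32 = "fp32"  # 32-bit floating point (CPU and GPU)
    BF16 = "bf16"  # bfloat16 mixed precision (CPU and GPU)


def select_precision(name: str) -> Precision:
    """Map a case-insensitive string such as 'FP32' to its enum member."""
    # Enum lookup by value raises ValueError for unknown strings,
    # which surfaces configuration typos early.
    return Precision(name.lower())


print(select_precision("FP32"))
print(select_precision("bf16"))
```

Looking a member up by value rather than by attribute name means a config file can say `precision: fp32` and still fail loudly on an unsupported string.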