# composer.core.precision

Enum class for the numerical precision to be used by the model.

Functions

- `get_precision_context`: Returns a context manager to automatically cast to a specific precision.

Classes

- `Precision`: Enum class for the numerical precision to be used by the model.
class composer.core.precision.Precision(value)

Enum class for the numerical precision to be used by the model.

AMP#

Use `torch.cuda.amp` automatic mixed precision. Only compatible with GPUs.

FP16#

Use 16-bit floating-point precision. Currently only compatible with GPUs on DeepSpeed.

FP32#

Use 32-bit floating-point precision. Compatible with CPUs and GPUs.

BF16#

Use bfloat16 mixed precision. Requires PyTorch 1.10 or later. Compatible with CPUs and GPUs.
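The members above can be illustrated with a minimal, self-contained sketch of a string-backed enum. This is not the library's implementation; the string values (`"amp"`, `"fp16"`, `"fp32"`, `"bf16"`) are assumptions chosen to mirror the member names:

```python
from enum import Enum

class Precision(Enum):
    """Hypothetical sketch of a string-backed precision enum.

    The real class lives in composer.core.precision; the string
    values below are assumed for illustration.
    """
    AMP = "amp"    # torch.cuda.amp mixed precision (GPU only)
    FP16 = "fp16"  # 16-bit floats (GPU, via DeepSpeed)
    FP32 = "fp32"  # 32-bit floats (CPU and GPU)
    BF16 = "bf16"  # bfloat16 mixed precision (CPU and GPU)

# A string-backed enum allows lookup by value, which is convenient
# when precision arrives as a config string:
precision = Precision("fp32")
```

Because `Enum` lookup by value raises `ValueError` for unknown strings, passing a config value through the constructor doubles as validation.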

composer.core.precision.get_precision_context(precision)

Returns a context manager to automatically cast to a specific precision.

Warning

Precision.FP16 is only supported when using DeepSpeed, as PyTorch does not natively support training in this precision. When this function is invoked with Precision.FP16, the returned precision context is a no-op.

Parameters

precision (str | Precision) – The precision to use within the returned context.
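The dispatch described above (a real autocast context for AMP-style precisions, a no-op otherwise) can be sketched as follows. This is a hypothetical stand-in, not the library's code: it uses `contextlib.nullcontext` in place of `torch.autocast` so the sketch runs without PyTorch, and the accepted precision strings are assumptions:

```python
import contextlib

def get_precision_context(precision: str):
    """Hypothetical sketch: map a precision string to a context manager.

    In the real library, 'amp' would return a torch.cuda.amp autocast
    context; here a nullcontext placeholder stands in for it so the
    sketch has no torch dependency.
    """
    if precision in ("fp32", "fp16"):
        # FP32 needs no casting; FP16 is handled entirely by DeepSpeed,
        # so the context is a no-op (matching the warning above).
        return contextlib.nullcontext()
    if precision in ("amp", "bf16"):
        # Placeholder for an autocast context (assumption).
        return contextlib.nullcontext()
    raise ValueError(f"Unsupported precision: {precision}")

# Usage: wrap the forward pass so casting applies only inside the block.
with get_precision_context("fp32"):
    pass  # model forward / loss computation would go here
```

Returning a context manager (rather than toggling global state) keeps the cast scoped: precision reverts automatically when the `with` block exits, even on exceptions.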