Device#

class composer.devices.Device[source]#

Abstract class for a device on which a model runs.

dist_backend#

Distributed backend to use. Should be gloo, mpi, or nccl. See the PyTorch distributed documentation for details.

Type

str
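
Only module_to_device and tensor_to_device are abstract, so a custom device can be sketched by setting dist_backend and implementing those two methods. The following is a minimal, hypothetical example based only on the interface documented on this page; a real subclass may need to satisfy additional members not shown here, and the choice of gloo is an assumption for CPU-only training.

import torch
from composer.devices import Device

class MyCPUDevice(Device):
    """Hypothetical device that keeps everything on the CPU."""

    dist_backend = 'gloo'  # assumption: gloo is the usual backend for CPU-only training

    def module_to_device(self, module: torch.nn.Module) -> torch.nn.Module:
        # Move the module's parameters and buffers onto the CPU.
        return module.to('cpu')

    def tensor_to_device(self, tensor: torch.Tensor) -> torch.Tensor:
        # Move a single tensor onto the CPU.
        return tensor.to('cpu')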

batch_to_device(batch)[source]#

Invoked by the Trainer to move all tensor items in a batch to the device.

Supports nested sequences and mappings of tensors. Ignores non-tensor items. Preserves sequence and mapping types when possible; otherwise, sequences are converted to lists, and mappings are converted to dictionaries.

Parameters

batch (Any) – The batch to move to the device.

Returns

Batch – The batch on the device.
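
For example, with a concrete device (composer.devices.DeviceCPU is assumed to be available as a subclass), a batch given as a mapping of tensors, nested sequences, and non-tensor metadata keeps its structure:

import torch
from composer.devices import DeviceCPU  # assumed concrete subclass; any Device works

device = DeviceCPU()
batch = {
    'inputs': torch.randn(8, 3, 32, 32),            # tensor: moved to the device
    'targets': [torch.tensor(0), torch.tensor(1)],  # nested sequence of tensors: each moved
    'metadata': {'split': 'train'},                 # non-tensor items are ignored (left as-is)
}
batch = device.batch_to_device(batch)
# The result is still a dict, with a list preserved under 'targets'.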

abstract module_to_device(module)[source]#

Invoked by the Trainer to move a module onto the device.

Parameters

module (Module) – The module to move to the device.

Returns

torch.nn.Module – The module on the device.
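
A short usage sketch, again assuming composer.devices.DeviceCPU as a concrete subclass; moving the module first ensures that any optimizer built afterwards references parameters that already live on the device:

import torch
from composer.devices import DeviceCPU  # assumed concrete subclass

device = DeviceCPU()
model = device.module_to_device(torch.nn.Linear(16, 4))
# model.weight.device now matches the target device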

optimizer_to_device(optimizer)[source]#

Invoked by the Trainer to move the optimizer's state onto the device.

Parameters

optimizer (Optimizer) – The optimizer to move to the device.

Returns

Optimizer – The optimizer on the device.
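
Optimizer state tensors (for example, momentum buffers) are created lazily after the first step or restored from a checkpoint, so they may not be on the same device as the parameters. A minimal sketch, assuming composer.devices.DeviceCPU as a concrete subclass:

import torch
from composer.devices import DeviceCPU  # assumed concrete subclass

device = DeviceCPU()
model = device.module_to_device(torch.nn.Linear(16, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

model(torch.randn(2, 16)).sum().backward()
optimizer.step()                                   # creates momentum buffers in optimizer.state
optimizer = device.optimizer_to_device(optimizer)  # moves those state tensors onto the device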

abstract tensor_to_device(tensor)[source]#

Invoked by the Trainer to move a tensor onto the device.

Parameters

tensor (Tensor) – The tensor to move to the device.

Returns

Tensor – The tensor on the device.
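
For a single tensor, the call is direct (composer.devices.DeviceCPU is assumed as the concrete subclass):

import torch
from composer.devices import DeviceCPU  # assumed concrete subclass

device = DeviceCPU()
x = device.tensor_to_device(torch.arange(4))  # returns the same data on the device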