prepare_ddp_module#
- composer.distributed.prepare_ddp_module(module, find_unused_parameters)[source]#
Wraps the module in a torch.nn.parallel.DistributedDataParallel object if running distributed training; the find_unused_parameters flag is passed through to DDP and should be set to True when the module's forward pass may leave some parameters unused.
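The wrap-only-when-distributed behavior can be sketched without torch. This is an illustrative stand-in, not Composer's implementation: DDPWrapper substitutes for torch.nn.parallel.DistributedDataParallel, and the world_size parameter stands in for the distributed runtime check.

```python
class DDPWrapper:
    """Illustrative stand-in for torch.nn.parallel.DistributedDataParallel."""

    def __init__(self, module, find_unused_parameters=False):
        self.module = module
        self.find_unused_parameters = find_unused_parameters


def prepare_ddp_module(module, find_unused_parameters, world_size=1):
    # Wrap only when more than one process participates in training;
    # a single-process run returns the module unchanged.
    if world_size > 1:
        return DDPWrapper(module, find_unused_parameters)
    return module


model = object()

# Single-process training: the module passes through untouched.
assert prepare_ddp_module(model, False, world_size=1) is model

# Distributed training: the module comes back wrapped, with the
# find_unused_parameters setting forwarded to the wrapper.
wrapped = prepare_ddp_module(model, True, world_size=4)
assert isinstance(wrapped, DDPWrapper)
assert wrapped.find_unused_parameters
```

In the real API the distributed check comes from the process group rather than an explicit argument, but the branching is the same: wrap under distributed training, pass through otherwise.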