prepare_fsdp_module

composer.distributed.prepare_fsdp_module(model, optimizers, fsdp_config, precision=None, device=None, auto_microbatching=False, te_rng_seed=1234)

Prepare a module (assumed to be a ComposerModel) and optimizer for use with torch.distributed.fsdp.FullyShardedDataParallel.

Parameters
  • model (Module) – The model to wrap.

  • optimizers (Optimizer | Sequence[Optimizer], optional) – The optimizer for model, assumed to have a single param group consisting of model.parameters().

  • fsdp_config (FSDPConfig) – The FSDP config.

  • precision (Precision, optional) – The precision being used by the Trainer, used to fill in defaults for FSDP mixed_precision settings.

  • device (Device, optional) – The device being used by the Trainer.

  • auto_microbatching (bool, optional) – Whether or not auto microbatching is enabled.

  • te_rng_seed (int, optional) – The seed to use for the Transformer Engine activation checkpointing RNG. Defaults to 1234.
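
For context, here is a minimal usage sketch. The import paths for FSDPConfig, Precision, and DeviceGPU, the sharding_strategy field, and MyComposerModel are assumptions for illustration and are not confirmed by this page.

    import torch

    from composer.distributed import prepare_fsdp_module
    from composer.utils.parallelism import FSDPConfig  # assumed import path
    from composer.core import Precision                # assumed import path
    from composer.devices import DeviceGPU             # assumed import path

    model = MyComposerModel()  # hypothetical ComposerModel subclass

    # A single param group covering model.parameters(), as this function assumes.
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    fsdp_config = FSDPConfig(sharding_strategy='FULL_SHARD')  # assumed field name

    # Per the description above, this prepares both the model and the optimizer
    # for use with FullyShardedDataParallel; precision fills in defaults for
    # FSDP mixed_precision settings.
    prepare_fsdp_module(
        model,
        optimizer,
        fsdp_config,
        precision=Precision.AMP_BF16,
        device=DeviceGPU(),
        auto_microbatching=False,
    )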