broadcast

composer.utils.dist.broadcast(tensor, src, group=None)

Broadcasts the tensor to the whole group.

tensor must have the same number of elements in all processes participating in the collective. See torch.distributed.broadcast().

Parameters
  • tensor (Tensor) – Data to be sent if src is the rank of the current process, and the tensor into which received data is written otherwise.

  • src (int) – Source rank.

  • group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Default: None.
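
A minimal usage sketch, assuming the distributed environment has already been initialized (e.g. by the Composer launcher); the tensor shape and values here are illustrative:

    import torch

    from composer.utils import dist

    if dist.get_global_rank() == 0:
        # Rank 0 holds the data to send.
        tensor = torch.arange(4, dtype=torch.float32)
    else:
        # All other ranks allocate a same-sized buffer to receive into;
        # the element count must match across every participating process.
        tensor = torch.zeros(4, dtype=torch.float32)

    dist.broadcast(tensor, src=0)
    # After the call, `tensor` on every rank holds [0., 1., 2., 3.].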