prepare_tp_module

composer.distributed.prepare_tp_module(model, tp_config)[source]

Prepare a module (assumed to be a ComposerModel) for use with tensor parallelism.
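
A minimal usage sketch is shown below. The TPConfig import path and its tensor_parallel_degree / layer_plan fields are assumptions based on Composer's parallelism configuration objects and may differ across versions; the layer plan itself uses PyTorch's standard ParallelStyle classes. The submodule names in layer_plan are hypothetical and must match your own model.

```python
import torch.nn as nn
from torch.distributed.tensor.parallel import ColwiseParallel, RowwiseParallel

from composer.distributed import prepare_tp_module
from composer.models import ComposerClassifier
from composer.utils import TPConfig  # assumed import path; check your Composer version

# A ComposerModel wrapping a simple two-layer network.
model = ComposerClassifier(
    module=nn.Sequential(
        nn.Linear(64, 128),
        nn.ReLU(),
        nn.Linear(128, 10),
    ),
    num_classes=10,
)

# layer_plan maps submodule names to PyTorch ParallelStyle objects,
# in the same spirit as torch.distributed.tensor.parallel.parallelize_module.
# The names '0' and '2' refer to the Linear layers in the Sequential above.
tp_config = TPConfig(
    tensor_parallel_degree=2,
    layer_plan={
        'module.0': ColwiseParallel(),
        'module.2': RowwiseParallel(),
    },
)

# Apply tensor parallelism to the wrapped module in place.
# Assumes torch.distributed has already been initialized across the ranks
# participating in the tensor-parallel group.
prepare_tp_module(model, tp_config)
```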