write_huggingface_pretrained_from_composer_checkpoint

composer.models.write_huggingface_pretrained_from_composer_checkpoint(checkpoint_path, output_folder, local_checkpoint_save_location=None)

Write a config.json and pytorch_model.bin, in the format that transformers.PreTrainedModel.from_pretrained() expects, from a Composer checkpoint.

Note

This function will not work properly if you used surgery algorithms when training your model. In that case, load the model weights using the Composer Trainer with the load_path argument instead.

Example:

import transformers

from composer.models import write_huggingface_pretrained_from_composer_checkpoint

write_huggingface_pretrained_from_composer_checkpoint('composer-hf-checkpoint.pt', './hf-save-pretrained-output')
loaded_model = transformers.AutoModelForSequenceClassification.from_pretrained('./hf-save-pretrained-output')
Parameters
  • checkpoint_path (Union[Path, str]) – Path to the composer checkpoint; can be a local path, a remote path beginning with s3://, or a path on another backend supported by composer.utils.maybe_create_object_store_from_uri().

  • output_folder (Union[Path, str]) – Path to the folder to write the output to. Must be a local path.

  • local_checkpoint_save_location (Optional[Union[Path, str]], optional) – If specified, the local path to save the checkpoint file to. If the input checkpoint_path is already a local path, this will be a symlink to it. Defaults to None, which uses a temporary file.
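To make the local_checkpoint_save_location semantics concrete, here is a minimal, stdlib-only sketch of the behavior described above (a temporary file by default, a symlink when the input checkpoint is already local). This is illustrative only, not the library's actual implementation, and resolve_local_path is a hypothetical helper name; it assumes a POSIX filesystem where symlinks are available.

```python
import os
import pathlib
import tempfile


def resolve_local_path(checkpoint_path, local_checkpoint_save_location=None):
    """Illustrative sketch of the local_checkpoint_save_location logic
    described above -- not the library's actual code."""
    if local_checkpoint_save_location is None:
        # Default (None): a fresh temporary location would receive the
        # checkpoint, e.g. one downloaded from a remote object store.
        local_checkpoint_save_location = (
            pathlib.Path(tempfile.mkdtemp()) / 'checkpoint.pt'
        )
    else:
        local_checkpoint_save_location = pathlib.Path(local_checkpoint_save_location)

    if os.path.exists(checkpoint_path):
        # The input is already a local file: point a symlink at it
        # instead of copying the (potentially large) checkpoint.
        if local_checkpoint_save_location.exists():
            local_checkpoint_save_location.unlink()
        os.symlink(os.path.abspath(checkpoint_path), local_checkpoint_save_location)

    return local_checkpoint_save_location
```

With a local checkpoint and an explicit save location, the returned path is a symlink to the original file; with the default None, the returned path lives under a temporary directory.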