Callback to export model for inference.

class composer.callbacks.export_for_inference.ExportForInferenceCallback(save_format, save_path, save_object_store=None, sample_input=None, transforms=None)

Bases: composer.core.callback.Callback

Callback to export model for inference.


>>> from composer import Trainer
>>> from composer.callbacks import ExportForInferenceCallback
>>> # constructing trainer object with this callback
>>> trainer = Trainer(
...     model=model,
...     train_dataloader=train_dataloader,
...     eval_dataloader=eval_dataloader,
...     optimizers=optimizer,
...     max_duration="1ep",
...     callbacks=[ExportForInferenceCallback(save_format='torchscript',save_path='/tmp/model.pth')],
... )
Parameters

  • save_format (Union[str, ExportFormat]) – Format to export to. Either "torchscript" or "onnx".

  • save_path (str) – The path for storing the exported model. It can be a path to a file on the local disk, a URL, or, if save_object_store is set, the object name in a cloud bucket. For example, my_run/exported_model.

  • save_object_store (ObjectStore, optional) – If save_path is an object name in a cloud bucket (e.g. AWS S3 or Google Cloud Storage), an instance of ObjectStore which will be used to store the exported model. If this is set to None, the model will be saved to save_path using the logger. (default: None)

  • sample_input (Any, optional) – Example model inputs used for tracing. This is required for "onnx" export.

  • transforms (Sequence[Transform], optional) – Transformations (usually optimizations) that should be applied to the model. Each Transform should be a callable that takes a model and returns a modified model.
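
Since each transform is just a callable that accepts a model and returns a (possibly modified) model, they can be written as ordinary Python functions. The sketch below is illustrative, not from the Composer source: `set_eval_mode` and `to_half_precision` are hypothetical transforms, and the sequential application mirrors how a `transforms` list would be consumed before export.

```python
import torch
import torch.nn as nn

def set_eval_mode(model: nn.Module) -> nn.Module:
    """Hypothetical transform: switch the model to eval mode before export."""
    model.eval()
    return model

def to_half_precision(model: nn.Module) -> nn.Module:
    """Hypothetical transform: cast parameters to float16 to shrink the export."""
    return model.half()

# Transforms are plain callables, applied in sequence to the model:
transforms = [set_eval_mode, to_half_precision]

model = nn.Linear(4, 2)
for transform in transforms:
    model = transform(model)
```

A list like this could then be passed as `ExportForInferenceCallback(..., transforms=transforms)` so the modified model, not the raw training model, is what gets exported.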