ExportForInferenceCallback

class composer.callbacks.ExportForInferenceCallback(save_format, save_path, save_object_store=None, sample_input=None, transforms=None, input_names=None, output_names=None)

Callback to export a model for inference.

Example

>>> from composer import Trainer
>>> from composer.callbacks import ExportForInferenceCallback
>>> # constructing trainer object with this callback
>>> trainer = Trainer(
...     model=model,
...     train_dataloader=train_dataloader,
...     eval_dataloader=eval_dataloader,
...     optimizers=optimizer,
...     max_duration="1ep",
...     callbacks=[ExportForInferenceCallback(save_format='torchscript', save_path='/tmp/model.pth')],
... )
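
A minimal sketch of an ONNX export, for which sample_input is needed. The tensor shape and the single positional input below are assumptions for illustration; match them to whatever your model's forward expects.

>>> import torch
>>> from composer import Trainer
>>> from composer.callbacks import ExportForInferenceCallback
>>> # illustrative sample batch; its shape and structure must match your model's forward
>>> sample_input = (torch.randn(1, 3, 224, 224),)
>>> trainer = Trainer(
...     model=model,
...     train_dataloader=train_dataloader,
...     eval_dataloader=eval_dataloader,
...     optimizers=optimizer,
...     max_duration="1ep",
...     callbacks=[
...         ExportForInferenceCallback(
...             save_format='onnx',
...             save_path='/tmp/model.onnx',
...             sample_input=sample_input,
...             input_names=['input'],
...             output_names=['output'],
...         )
...     ],
... )
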
Parameters
  • save_format (Union[str, ExportFormat]) – Format to export to. Either "torchscript" or "onnx".

  • save_path (str) – The path for storing the exported model. It can be a path to a file on the local disk, a URL, or, if save_object_store is set, the object name in a cloud bucket. For example, my_run/exported_model.

  • save_object_store (ObjectStore, optional) – If save_path is an object name in a cloud bucket (e.g., AWS S3 or Google Cloud Storage), an instance of ObjectStore that will be used to store the exported model. If set to None, the model is saved to save_path using the logger. (default: None)

  • sample_input (Any, optional) – Example model inputs used for tracing. This is needed for "onnx" export (see the ONNX example above).

  • transforms (Sequence[Transform], optional) – Transformations (usually optimizations) that should be applied to the model. Each Transform should be a callable that takes a model and returns a modified model (see the sketch after this list).

  • input_names (Sequence[str], optional) – Names to assign to the input nodes of the graph, in order. If set to None, the keys from sample_input are used, falling back to ["input"].

  • output_names (Sequence[str], optional) – Names to assign to the output nodes of the graph, in order. If set to None, defaults to ["output"].
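
As a hedged sketch of the transforms hook: each entry is just a callable from model to model, so a post-training tweak such as switching the model to eval mode before export can be expressed as a plain function. The helper name to_eval_mode below is hypothetical, not part of the Composer API.

>>> import torch.nn as nn
>>> # hypothetical Transform: any callable mapping a model to a (possibly modified) model
>>> def to_eval_mode(model: nn.Module) -> nn.Module:
...     return model.eval()
>>> callback = ExportForInferenceCallback(
...     save_format='torchscript',
...     save_path='/tmp/model.pth',
...     transforms=[to_eval_mode],
... )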