ExportForInferenceCallback#
- class composer.callbacks.ExportForInferenceCallback(save_format, save_path, save_object_store=None, sample_input=None, transforms=None)[source]#
Callback to export model for inference.
Example
>>> from composer import Trainer
>>> from composer.callbacks import ExportForInferenceCallback
>>> # constructing trainer object with this callback
>>> trainer = Trainer(
...     model=model,
...     train_dataloader=train_dataloader,
...     eval_dataloader=eval_dataloader,
...     optimizers=optimizer,
...     max_duration="1ep",
...     callbacks=[ExportForInferenceCallback(save_format='torchscript', save_path='/tmp/model.pth')],
... )
- Parameters
- save_format (Union[str, ExportFormat]) – Format to export to. Either "torchscript" or "onnx".
- save_path (str) – The path for storing the exported model. It can be a path to a file on the local disk, a URL, or, if save_object_store is set, the object name in a cloud bucket. For example, my_run/exported_model.
- save_object_store (ObjectStore, optional) – If the save_path is the object name in a cloud bucket (e.g. AWS S3 or Google Cloud Storage), an instance of ObjectStore which will be used to store the exported model. If this is set to None, the model will be saved to save_path using the logger. (default: None)
- sample_input (Any, optional) – Example model inputs used for tracing. This is needed for "onnx" export.
- transforms (Sequence[Transform], optional) – Transformations (usually optimizations) that should be applied to the model. Each Transform should be a callable that takes a model and returns a modified model.
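The Transform contract above can be sketched in plain Python: each entry in transforms is a callable that accepts a model and returns a (possibly modified) model, and the callback applies them in sequence before export. The sketch below does not use Composer itself; DummyModel and set_eval_mode are hypothetical names used only to illustrate the calling convention.

```python
class DummyModel:
    """Stand-in for a torch.nn.Module, used only for illustration."""

    def __init__(self):
        self.training = True


def set_eval_mode(model):
    """A transform: takes a model, returns the modified model.

    Here it switches the model to inference mode, a typical
    pre-export optimization.
    """
    model.training = False
    return model


# How a sequence of transforms would be applied before export:
transforms = [set_eval_mode]
model = DummyModel()
for transform in transforms:
    model = transform(model)

print(model.training)  # -> False
```

Because every transform both takes and returns a model, transforms compose naturally: the output of one becomes the input of the next.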