class composer.loggers.LoggerDestination(*args, **kwargs)[source]#

Base class for logger destinations.

As this class extends Callback, logger destinations can run on any training loop Event. For example, it may be helpful to run on Event.EPOCH_END to perform any flushing at the end of every epoch.


>>> from composer.loggers import LoggerDestination
>>> from composer.trainer import Trainer
>>> class MyLogger(LoggerDestination):
...     def log_hyperparameters(self, data):
...         print(f'Hyperparameters: {data}')
>>> logger = MyLogger()
>>> trainer = Trainer(
...     ...,
...     loggers=[logger]
... )
Hyperparameters: {'rank_zero_seed': ...}
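Because a LoggerDestination is also a Callback, a common pattern is to buffer data cheaply during training and flush it on Event.EPOCH_END. The sketch below illustrates that pattern without importing composer; the class and the `epoch_end` hook name are stand-ins for the real Composer API, not part of it:

```python
import json


class BufferedLogger:
    """Stand-in for a LoggerDestination that buffers metrics in memory
    and flushes them at the end of every epoch (mimicking Event.EPOCH_END)."""

    def __init__(self):
        self.buffer = []   # metrics accumulated during the current epoch
        self.flushed = []  # serialized records written out at each flush

    def log_metrics(self, metrics, step=None):
        # Cheap: just append; defer all I/O until the epoch boundary.
        self.buffer.append((step, metrics))

    def epoch_end(self):
        # Called at the end of every epoch; perform the actual write here.
        for step, metrics in self.buffer:
            self.flushed.append(json.dumps({'step': step, **metrics}))
        self.buffer.clear()


logger = BufferedLogger()
logger.log_metrics({'loss': 0.9}, step=0)
logger.log_metrics({'loss': 0.7}, step=1)
logger.epoch_end()
print(len(logger.flushed))  # 2
```

Deferring I/O to the epoch boundary keeps per-batch logging off the critical path of the training loop.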

Indicates whether LoggerDestination can log file artifacts.

Defaults to False; derived logger classes that implement log_file_artifact() should return True.


bool – Whether the class supports logging file artifacts.

get_file_artifact(artifact_name, destination, overwrite=False, progress_bar=True)[source]#

Handle downloading an artifact named artifact_name to destination.

  • artifact_name (str) – The name of the artifact.

  • destination (str) – The destination filepath.

  • overwrite (bool) – Whether to overwrite an existing file at destination. Defaults to False.

  • progress_bar (bool, optional) – Whether to show a progress bar. Ignored if the path is a local file. (default: True)
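A derived logger's get_file_artifact typically resolves artifact_name in its backing store and copies it to destination, honoring the overwrite flag. A minimal sketch, using a local directory as a stand-in for an object store (the store layout and the free function form are assumptions for illustration, not the Composer API):

```python
import os
import shutil
import tempfile


def get_file_artifact(store_dir, artifact_name, destination, overwrite=False):
    """Copy the artifact named `artifact_name` from a local stand-in store
    to `destination`, refusing to clobber an existing file unless `overwrite`."""
    if os.path.exists(destination) and not overwrite:
        raise FileExistsError(f'{destination} exists and overwrite=False')
    source = os.path.join(store_dir, artifact_name)
    shutil.copy2(source, destination)


# Stage a fake artifact in the stand-in store, then fetch it.
store = tempfile.mkdtemp()
with open(os.path.join(store, 'ckpt.pt'), 'w') as f:
    f.write('weights')

dest = os.path.join(tempfile.mkdtemp(), 'local_ckpt.pt')
get_file_artifact(store, 'ckpt.pt', dest)
print(open(dest).read())  # weights
```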

log_file_artifact(state, log_level, artifact_name, file_path, *, overwrite)[source]#

Handle logging of a file artifact stored at file_path to an artifact named artifact_name.

Subclasses should implement this method to store logged files (e.g., copy them to another folder or upload them to an object store). However, not all loggers need to implement this method; for example, the TQDMLogger does not, as it cannot handle file artifacts.


  • This method will block the training loop. For optimal performance, it is recommended that this method copy the file to a temporary directory, enqueue the copied file for processing, and return. Then, use one or more background threads or processes to read from this queue and perform any I/O.

  • After this method returns, training can resume, and the contents of file_path may change (or may be deleted). Thus, if processing the file in the background (as is recommended), it is necessary to first copy the file to a temporary directory. Otherwise, the original file may no longer exist, or the logged artifact can be corrupted (e.g., if the logger destination is reading from the file while the training loop is writing to it).

See also

Artifact Logging for notes on file artifact logging.

  • state (State) – The training state.

  • log_level (Union[str, LogLevel]) – A LogLevel.

  • artifact_name (str) – The name of the artifact.

  • file_path (Path) – The file path.

  • overwrite (bool, optional) – Whether to overwrite an existing artifact with the same artifact_name. (default: False)
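The notes above suggest a specific shape for implementations: copy the file to a temporary directory, enqueue it, return immediately, and let a background thread do the slow I/O. A minimal sketch of that pattern, assuming an in-memory list as a stand-in for a real object-store upload (the class name and `wait` helper are illustrative, not the Composer API):

```python
import os
import queue
import shutil
import tempfile
import threading


class FileArtifactLogger:
    """Sketch of the recommended log_file_artifact pattern: stage the file
    in a temp dir, enqueue it, and let a background thread perform the I/O."""

    def __init__(self):
        self.tempdir = tempfile.mkdtemp()
        self.queue = queue.Queue()
        self.uploaded = []  # stand-in for a real object store
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def log_file_artifact(self, artifact_name, file_path, *, overwrite=False):
        # Copy first: after this method returns, file_path may change
        # or be deleted by the training loop.
        staged = os.path.join(self.tempdir, artifact_name)
        if os.path.exists(staged) and not overwrite:
            raise FileExistsError(artifact_name)
        shutil.copy2(file_path, staged)
        self.queue.put((artifact_name, staged))  # return quickly; no blocking I/O

    def _drain(self):
        while True:
            artifact_name, staged = self.queue.get()
            # A real destination would upload `staged` to an object store here.
            self.uploaded.append(artifact_name)
            self.queue.task_done()

    def wait(self):
        # Block until the background worker has processed every queued file.
        self.queue.join()


# Simulate the training loop handing the logger a checkpoint file.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b'checkpoint')
src.close()

logger = FileArtifactLogger()
logger.log_file_artifact('ep1.pt', src.name)
logger.wait()
print(logger.uploaded)  # ['ep1.pt']
```

Because the file is staged before `log_file_artifact` returns, the training loop is free to overwrite or delete the original path without corrupting the logged artifact.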


log_hyperparameters(hyperparameters)[source]#

Log hyperparameters, configurations, and settings.

Logs any parameter/configuration/setting that doesn't vary during the run.


hyperparameters (Dict[str, Any]) – A dictionary mapping hyperparameter names (strings) to their values (Any).

log_metrics(metrics, step=None)[source]#

Log metrics or parameters that vary during training.

  • metrics (Dict[str, float]) – Dictionary mapping metric name (str) to metric scalar value (float).

  • step (Optional[int], optional) – The current step or batch of training at the time of logging. Defaults to None. If not specified, the specific LoggerDestination implementation will choose a step (usually a running counter).
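The step parameter's fallback behavior can be illustrated with a toy destination that keeps a running counter when step is omitted. This is a sketch of one plausible implementation of that contract (the class name and storage format are assumptions, not part of Composer):

```python
class CountingMetricsLogger:
    """Sketch: when `step` is None, fall back to a running counter,
    as the log_metrics docstring permits implementations to do."""

    def __init__(self):
        self._auto_step = 0
        self.history = {}  # step -> dict of metrics logged at that step

    def log_metrics(self, metrics, step=None):
        if step is None:
            step = self._auto_step       # use the running counter
        self._auto_step = step + 1       # resume counting after any explicit step
        self.history.setdefault(step, {}).update(metrics)


logger = CountingMetricsLogger()
logger.log_metrics({'loss': 1.0})           # step 0 (auto)
logger.log_metrics({'loss': 0.8}, step=10)  # explicit step
logger.log_metrics({'lr': 0.1})             # step 11 (counter resumes after 10)
print(sorted(logger.history))  # [0, 10, 11]
```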


log_traces(traces)[source]#

Log traces. Logs any debug-related data like algorithm traces.


traces (Dict[str, Any]) – Dictionary mapping trace names (str) to trace values (Any).