🪵 Logging#
By default, the trainer enables ProgressBarLogger, which logs information to a tqdm progress bar.
To attach other loggers, use the loggers argument. For example, the code below logs results to Weights and Biases and also saves them to the file log.txt.
from composer import Trainer
from composer.loggers import WandBLogger, FileLogger

wandb_logger = WandBLogger()
file_logger = FileLogger(filename="log.txt")

trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    eval_dataloader=eval_dataloader,
    loggers=[wandb_logger, file_logger],
)
Available Loggers#
- FileLogger: Log data to a file.
- WandBLogger: Log to Weights and Biases (https://wandb.ai/).
- ProgressBarLogger: Log metrics to the console and optionally show a progress bar.
- InMemoryLogger: Log metrics to dictionary objects that persist in memory throughout training.
- ObjectStoreLogger: Logger destination that uploads artifacts to an object store.
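Whichever destinations are attached, the trainer fans each logged value out to all of them. The following standalone sketch illustrates that dispatch pattern; the class and method names here are illustrative stand-ins, not Composer's actual implementation:

```python
from typing import Any, Dict, List


class Destination:
    """Illustrative stand-in for a logger destination."""

    def log_data(self, data: Dict[str, Any]) -> None:
        raise NotImplementedError


class InMemoryDestination(Destination):
    """Keeps every logged dictionary in a list."""

    def __init__(self) -> None:
        self.records: List[Dict[str, Any]] = []

    def log_data(self, data: Dict[str, Any]) -> None:
        self.records.append(dict(data))


class FanoutLogger:
    """Broadcasts each log call to all attached destinations."""

    def __init__(self, destinations: List[Destination]) -> None:
        self.destinations = destinations

    def data(self, data: Dict[str, Any]) -> None:
        for dest in self.destinations:
            dest.log_data(data)


mem_a, mem_b = InMemoryDestination(), InMemoryDestination()
logger = FanoutLogger([mem_a, mem_b])
logger.data({"loss/train": 0.25})
# Both destinations receive the same record.
```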
Automatically Logged Data#
The Trainer automatically logs the following data:

- trainer/algorithms: a list of the specified algorithm names.
- epoch: the current epoch.
- trainer/global_step: the total number of training steps that have been performed.
- trainer/batch_idx: the current training step (batch) within the epoch.
- loss/train: the training loss calculated from the current batch.
- All the validation metrics specified in the ComposerModel object passed to the Trainer.
Logging Additional Data#
To log additional data, create a custom Callback. Each of its methods has access to the Logger.
from composer import Callback, State
from composer.loggers import Logger

class EpochMonitor(Callback):
    def epoch_end(self, state: State, logger: Logger):
        logger.data_epoch({"Epoch": state.epoch})
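The trainer drives these hooks itself: at the end of each epoch it invokes epoch_end on every registered callback, passing the current state and the logger. A standalone sketch of that control flow, using illustrative stand-in classes rather than Composer's internals:

```python
from typing import Any, Dict, List


class FakeState:
    """Illustrative stand-in for the trainer state."""

    def __init__(self) -> None:
        self.epoch = 0


class FakeLogger:
    """Collects logged data in memory (illustrative)."""

    def __init__(self) -> None:
        self.records: List[Dict[str, Any]] = []

    def data_epoch(self, data: Dict[str, Any]) -> None:
        self.records.append(dict(data))


class EpochMonitor:
    def epoch_end(self, state: FakeState, logger: FakeLogger) -> None:
        logger.data_epoch({"Epoch": state.epoch})


# Sketch of the trainer's loop: invoke each callback's hook once per epoch.
state, logger = FakeState(), FakeLogger()
callbacks = [EpochMonitor()]
for epoch in range(3):
    state.epoch = epoch
    for cb in callbacks:
        cb.epoch_end(state, logger)
```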
Similarly, Algorithm classes are also provided the Logger to log any desired information.
See also
Algorithms and Callbacks
Logging Levels#
LogLevel specifies three logging levels that denote where in the training loop log messages are generated. The logging levels are:

- LogLevel.FIT: metrics logged once per training run, typically before the first epoch.
- LogLevel.EPOCH: metrics logged once per epoch.
- LogLevel.BATCH: metrics logged once per batch.
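The levels form an ordering (FIT < EPOCH < BATCH), so a destination can accept everything at or below a chosen granularity with a simple comparison, as the DictionaryLogger example further down does with `log_level <= self.log_level`. A standalone sketch of that ordering using an IntEnum (this enum is illustrative, not Composer's actual LogLevel class):

```python
from enum import IntEnum


class LogLevel(IntEnum):
    """Illustrative ordered levels: FIT < EPOCH < BATCH."""

    FIT = 1
    EPOCH = 2
    BATCH = 3


def should_log(message_level: LogLevel, destination_level: LogLevel) -> bool:
    """A destination set to EPOCH keeps FIT and EPOCH messages but drops BATCH."""
    return message_level <= destination_level


assert should_log(LogLevel.FIT, LogLevel.EPOCH)
assert should_log(LogLevel.EPOCH, LogLevel.EPOCH)
assert not should_log(LogLevel.BATCH, LogLevel.EPOCH)
```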
Custom Logger Destinations#
To use a custom logger destination, create a class that inherits from LoggerDestination. Here is an example which logs all metrics into a dictionary:
from typing import Any, Dict

from composer.loggers.logger_destination import LoggerDestination
from composer.loggers.logger import LogLevel
from composer.core.time import Timestamp
from composer.core.state import State

class DictionaryLogger(LoggerDestination):
    def __init__(self, log_level: LogLevel = LogLevel.BATCH):
        self.log_level = log_level
        # Dictionary to store logged data
        self.data = {}

    def log_data(self, state: State, log_level: LogLevel, data: Dict[str, Any]):
        if log_level <= self.log_level:
            for k, v in data.items():
                if k not in self.data:
                    self.data[k] = []
                self.data[k].append((state.timer.get_timestamp(), log_level, v))

# Construct a trainer using this logger
trainer = Trainer(..., loggers=[DictionaryLogger()])
In addition, LoggerDestination can also implement the event-based hooks of a typical callback if needed. See Callbacks for more information.
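Because a destination can double as a callback, it can manage resources around training events, for example buffering records and flushing them when the run closes. A standalone sketch of that shape (the class and hook names below are illustrative, not Composer's API):

```python
from typing import Any, Dict, List


class BufferingDestination:
    """Illustrative destination that buffers records and flushes them on close."""

    def __init__(self) -> None:
        self.buffer: List[Dict[str, Any]] = []
        self.flushed: List[Dict[str, Any]] = []

    # Logging hook: called with each batch of logged data.
    def log_data(self, data: Dict[str, Any]) -> None:
        self.buffer.append(dict(data))

    # Event-based hook: called once when the run ends.
    def close(self) -> None:
        self.flushed.extend(self.buffer)
        self.buffer.clear()


dest = BufferingDestination()
dest.log_data({"loss/train": 0.5})
dest.close()
```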