composer.trainer.trainer_hparams
The Hparams used to construct the Trainer.
Hparams
These classes are used with yahp for YAML-based configuration.
TrainerHparams – Params for instantiating the Trainer.
class composer.trainer.trainer_hparams.TrainerHparams(model, train_dataset, train_batch_size, dataloader, max_duration, datadir=None, val_dataset=None, eval_batch_size=None, evaluators=None, algorithms=<factory>, optimizer=None, schedulers=<factory>, device=<factory>, grad_accum=1, grad_clip_norm=None, validate_every_n_epochs=1, validate_every_n_batches=-1, compute_training_metrics=False, precision=Precision.AMP, scale_schedule_ratio=1.0, step_schedulers_every_batch=True, dist_timeout=15.0, ddp_sync_strategy=None, seed=None, deterministic_mode=False, callbacks=<factory>, loggers=<factory>, python_log_level='INFO', run_name=None, progress_bar=True, log_to_console=None, console_log_level=LogLevel.EPOCH, console_stream='stderr', load_path=None, load_object_store=None, load_weights_only=False, load_strict_model_weights=False, load_chunk_size=1048576, load_progress_bar=True, save_folder=None, save_filename='ep{epoch}-ba{batch}-rank{rank}', save_artifact_name='{run_name}/checkpoints/ep{epoch}-ba{batch}-rank{rank}', save_latest_filename='latest-rank{rank}', save_latest_artifact_name='{run_name}/checkpoints/latest-rank{rank}', save_overwrite=False, save_weights_only=False, save_interval='1ep', save_num_checkpoints_to_keep=-1, train_subset_num_batches=None, eval_subset_num_batches=None, deepspeed=None, prof_trace_handlers=<factory>, prof_schedule=None, sys_prof_cpu=True, sys_prof_memory=False, sys_prof_disk=False, sys_prof_net=False, sys_prof_stats_thread_interval_seconds=0.5, torch_prof_folder='{run_name}/torch_traces', torch_prof_filename='rank{rank}.{batch}.pt.trace.json', torch_prof_artifact_name='{run_name}/torch_traces/rank{rank}.{batch}.pt.trace.json', torch_prof_overwrite=False, torch_prof_use_gzip=False, torch_prof_num_traces_to_keep=-1, torch_prof_record_shapes=False, torch_prof_profile_memory=False, torch_prof_with_stack=False, torch_prof_with_flops=False)
Bases: yahp.hparams.Hparams
Params for instantiating the Trainer.
See also: the documentation for the Trainer.
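A minimal sketch of the intended workflow, under light assumptions (the YAML path is hypothetical; TrainerHparams.create comes from the yahp Hparams base class, and initialize_object builds the Trainer): load and validate the hparams from a YAML file, then construct and run the Trainer.

    from composer.trainer.trainer_hparams import TrainerHparams

    # Load and validate hyperparameters from a YAML file.
    # "train_config.yaml" is a hypothetical path; cli_args=False skips
    # merging command-line overrides into the config.
    hparams = TrainerHparams.create(f="train_config.yaml", cli_args=False)

    # Build the Trainer from the validated hparams and start training.
    trainer = hparams.initialize_object()
    trainer.fit()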
Parameters
- model (ModelHparams) – Hparams for constructing the model to train. See also composer.models for the models built into Composer.
- train_dataset (DatasetHparams) – Hparams used to construct the dataset used for training. See also composer.datasets for the datasets built into Composer.
- train_batch_size (int) – The optimization batch size to use for training. This is the total batch size used to produce a gradient for the optimizer update step.
- dataloader (DataLoaderHparams) – Hparams used for constructing the dataloader, which will be used for loading the train dataset and (if provided) the validation dataset.
- max_duration (str) – The maximum duration to train, as a string (e.g. "1ep" or "10ba"). Will be converted to a Time object; see Time for more details on time construction. A minimal config exercising the required fields above is sketched after this list.
- datadir (str, optional) – Data directory to apply to both the training and validation datasets. If specified, it overrides both train_dataset.datadir and val_dataset.datadir. (default: None)
- val_dataset (DatasetHparams, optional) – Hparams for constructing the dataset used for evaluation. (default: None) See also composer.datasets for the datasets built into Composer.
- eval_batch_size (int, optional) – The batch size to use for evaluation. Must be provided if one of val_dataset or evaluators is set. (default: None)
- evaluators (List[EvaluatorHparams], optional) – Hparams for constructing the evaluators to use during the eval loop. Evaluators should be used when evaluating one or more specific metrics across one or more datasets. (default: None) See also Evaluator for more details on evaluators.
- algorithms (List[AlgorithmHparams], optional) – The algorithms to use during training. (default: []) See also composer.algorithms for the algorithms built into Composer.
- optimizer (OptimizerHparams, optional) – Hparams for constructing the optimizer. (default: None) See also Trainer for the default optimizer behavior when None is provided, and composer.optim for the optimizers built into Composer.
- schedulers (List[SchedulerHparams], optional) – The learning rate schedulers. (default: []) See also Trainer for the default scheduler behavior when [] is provided, and composer.optim.scheduler for the schedulers built into Composer.
- device (DeviceHparams) – Hparams for constructing the device used for training. (default: CPUDeviceHparams)
- step_schedulers_every_batch (bool, optional) – See Trainer.
- ddp_sync_strategy (DDPSyncStrategy, optional) – See Trainer.
- loggers (List[LoggerDestinationHparams], optional) – Hparams for constructing the destinations to log to. (default: []) See also composer.loggers for the loggers built into Composer.
- python_log_level (str) – The Python log level to use for log statements in the composer module. (default: INFO) See also the logging module in Python.
- callbacks (List[CallbackHparams], optional) – Hparams to construct the callbacks to run during training. (default: []) See also composer.callbacks for the callbacks built into Composer.
- load_object_store (ObjectStore, optional) – See Trainer.
- save_folder (str, optional) – See CheckpointSaver.
- save_filename (str, optional) – See CheckpointSaver. The default template's placeholders are filled from the run, as illustrated after this list.
- save_artifact_name (str, optional) – See CheckpointSaver.
- save_latest_filename (str, optional) – See CheckpointSaver.
- save_latest_artifact_name (str, optional) – See CheckpointSaver.
- save_overwrite (bool, optional) – See CheckpointSaver.
- save_weights_only (bool, optional) – See CheckpointSaver.
- save_interval (str, optional) – See CheckpointSaverHparams.
- save_num_checkpoints_to_keep (int, optional) – See CheckpointSaver.
- deepspeed_config (Dict[str, JSON], optional) – If set to a dict, it will be used as the DeepSpeed config for training (see Trainer for more details). If None, False is passed to the trainer for the deepspeed_config parameter, signaling that DeepSpeed will not be used for training.
- prof_schedule (ProfileScheduleHparams, optional) – Profile schedule hparams. Must be specified to enable the profiler.
- prof_trace_handlers (List[TraceHandlerHparams], optional) – See Trainer. Must be specified to enable the profiler.
- prof_skip_first (int, optional) – See Trainer.
- prof_wait (int, optional) – See Trainer.
- sys_prof_stats_thread_interval_seconds (float, optional) – See Trainer.
- torch_prof_folder (str, optional) – See TorchProfiler. Ignored if prof_schedule and prof_trace_handlers are not specified.
- torch_prof_filename (str, optional) – See TorchProfiler. Ignored if prof_schedule and prof_trace_handlers are not specified.
- torch_prof_artifact_name (str, optional) – See TorchProfiler. Ignored if prof_schedule and prof_trace_handlers are not specified.
- torch_prof_overwrite (bool, optional) – See TorchProfiler. Ignored if prof_schedule and prof_trace_handlers are not specified.
- torch_prof_use_gzip (bool, optional) – See TorchProfiler. Ignored if prof_schedule and prof_trace_handlers are not specified.
- torch_prof_record_shapes (bool, optional) – See TorchProfiler. Ignored if prof_schedule and prof_trace_handlers are not specified.
- torch_prof_profile_memory (bool, optional) – See TorchProfiler. Ignored if prof_schedule and prof_trace_handlers are not specified.
- torch_prof_with_stack (bool, optional) – See TorchProfiler. Ignored if prof_schedule and prof_trace_handlers are not specified.
- torch_prof_with_flops (bool, optional) – See TorchProfiler. Ignored if prof_schedule and prof_trace_handlers are not specified.
- torch_prof_num_traces_to_keep (int, optional) – See TorchProfiler. Ignored if prof_schedule and prof_trace_handlers are not specified.
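The constructor fields map one-to-one onto top-level YAML keys, which is how yahp drives this class. The sketch below is a hypothetical minimal config: the model and dataset registry names, their nested keys, and the file path are illustrative assumptions, not values documented on this page.

    import textwrap

    from composer.trainer.trainer_hparams import TrainerHparams

    # Hypothetical minimal config. Each top-level key matches a TrainerHparams
    # field; nested keys select registered hparams classes (names assumed).
    CONFIG = textwrap.dedent("""
        model:
          resnet56_cifar10: {}
        train_dataset:
          cifar10:
            datadir: /tmp/cifar10
            download: true
        train_batch_size: 1024
        dataloader: {}          # assumes all DataLoaderHparams fields default
        max_duration: 10ep      # parsed into a Time object (see max_duration)
    """)

    with open("train_config.yaml", "w") as f:
        f.write(CONFIG)

    hparams = TrainerHparams.create(f="train_config.yaml", cli_args=False)

Note that the time-string format of max_duration ("10ep" here, or "10ba" for batches) follows the Time construction described above.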
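For save_filename and the other save_* templates, the brace placeholders are filled in from the run (epoch, batch, rank, run_name). The snippet below only illustrates how the default template from the signature expands, using plain str.format; the real substitution is performed by Composer's checkpoint saving machinery with values from the training state.

    # Default template from the signature above.
    save_filename = "ep{epoch}-ba{batch}-rank{rank}"

    # Illustrative values only; real values come from the training state.
    print(save_filename.format(epoch=10, batch=3900, rank=0))
    # -> ep10-ba3900-rank0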