print_env

composer.utils.print_env(file=None)

Generate system information report.

Example:

.. code-block:: python

    from composer.utils.collect_env import print_env

    print_env()

Sample Report:

---------------------------------
System Environment Report
Created: 2022-04-27 00:25:33 UTC
---------------------------------

PyTorch information
-------------------
PyTorch version: 1.9.1+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27

Python version: 3.8 (64-bit runtime)
Python platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3080
GPU 1: NVIDIA GeForce RTX 3080
GPU 2: NVIDIA GeForce RTX 3080
GPU 3: NVIDIA GeForce RTX 3080

Nvidia driver version: 470.57.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] pytorch-ranger==0.1.1
[pip3] torch==1.9.1+cu111
[pip3] torch-optimizer==0.1.0
[pip3] torchmetrics==0.7.3
[pip3] torchvision==0.10.1+cu111
[pip3] vit-pytorch==0.27.0
[conda] Could not collect


Composer information
--------------------
Composer version: 0.8.2
Composer commit hash: 9e14a47562def0baa414242c36954eb3083dcd46
Host processor model name: AMD EPYC 7502 32-Core Processor
Host processor core count: 64
Number of nodes: 1
Accelerator model name: NVIDIA GeForce RTX 3080
Accelerators per node: 1
CUDA Device Count: 4

Parameters

file (TextIO, optional) – File handle to write the report to, such as sys.stdout or sys.stderr. Defaults to sys.stdout.
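Because ``file`` accepts any writable text stream, the report can be redirected to a log file or captured in an in-memory buffer instead of the terminal. A minimal sketch of that pattern using only the standard library — ``print_report`` here is a hypothetical stand-in, not Composer's actual implementation:

```python
import io
import platform
import sys

def print_report(file=None):
    # Fall back to sys.stdout when no stream is given, mirroring print_env's default
    out = file if file is not None else sys.stdout
    out.write("Python version: %s\n" % platform.python_version())
    out.write("Platform: %s\n" % platform.platform())

# Capture the report in a string buffer rather than writing to the terminal
buffer = io.StringIO()
print_report(file=buffer)
print(buffer.getvalue())
```

The same call with ``open("report.txt", "w")`` as the stream would persist the report to disk.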

Copyright © 2022, MosaicML, Inc.