composer_deeplabv3
- composer.models.composer_deeplabv3(num_classes, backbone_arch='resnet101', backbone_weights=None, sync_bn=True, use_plus=True, ignore_index=-1, cross_entropy_weight=1.0, dice_weight=0.0, initializers=())
- Helper function to create a ComposerClassifier with a DeepLabv3(+) model. Logs Mean Intersection over Union (MIoU) and Cross Entropy during training and validation.
- From Rethinking Atrous Convolution for Semantic Image Segmentation (Chen et al., 2017).
- Parameters
  - num_classes (int) – Number of classes in the segmentation task.
  - backbone_arch (str, optional) – The architecture to use for the backbone. Must be either 'resnet50' or 'resnet101'. Default: 'resnet101'.
  - backbone_weights (str, optional) – If specified, the PyTorch pre-trained weights to load for the backbone. Currently, only 'IMAGENET1K_V1' and 'IMAGENET1K_V2' are supported. Default: None.
  - sync_bn (bool, optional) – If True, replace all BatchNorm layers with SyncBatchNorm layers. Default: True.
  - use_plus (bool, optional) – If True, use the DeepLabv3+ head instead of DeepLabv3. Default: True.
  - ignore_index (int) – Class label to ignore when calculating the loss and other metrics. Default: -1.
  - cross_entropy_weight (float) – Weight to scale the cross entropy loss. Default: 1.0.
  - dice_weight (float) – Weight to scale the dice loss. Default: 0.0. (See the sketch after this list.)
  - initializers (List[Initializer], optional) – Initializers for the model. [] for no initialization. Default: [].
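
As referenced in the dice_weight entry above, the two weight parameters scale their respective loss terms before they are combined. The snippet below is a minimal sketch of that combination, assuming the weighted terms are simply summed; the weight and per-batch loss values are placeholders for illustration only, not values from the library.

  # Sketch (assumption): total loss = cross_entropy_weight * CE + dice_weight * Dice
  cross_entropy_weight = 0.375   # illustrative value
  dice_weight = 1.125            # illustrative value
  cross_entropy_loss = 0.80      # placeholder per-batch cross entropy value
  dice_loss = 0.60               # placeholder per-batch dice value
  total_loss = cross_entropy_weight * cross_entropy_loss + dice_weight * dice_loss
  # total_loss == 0.975 with these placeholder numbers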
- Returns
  ComposerModel – instance of ComposerClassifier with a DeepLabv3(+) model.
Example:

  from composer.models import composer_deeplabv3

  model = composer_deeplabv3(num_classes=150,
                             backbone_arch='resnet101',
                             backbone_weights=None)
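
The returned ComposerModel can be trained with the Composer Trainer. The following is a minimal sketch using synthetic data; the random tensors, batch size, optimizer settings, and loss weights are illustrative assumptions rather than values from this documentation.

  import torch
  from torch.utils.data import DataLoader, TensorDataset

  from composer import Trainer
  from composer.models import composer_deeplabv3

  # Synthetic stand-in data: 8 RGB images with integer segmentation masks (assumption).
  images = torch.randn(8, 3, 224, 224)
  targets = torch.randint(0, 150, (8, 224, 224))
  train_dataloader = DataLoader(TensorDataset(images, targets), batch_size=2)

  model = composer_deeplabv3(
      num_classes=150,
      backbone_arch='resnet50',
      backbone_weights=None,      # set to 'IMAGENET1K_V2' to start from ImageNet weights
      sync_bn=False,              # SyncBatchNorm requires a distributed run
      cross_entropy_weight=1.0,   # illustrative loss weights; tune for your task
      dice_weight=1.0,
  )

  optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

  trainer = Trainer(
      model=model,
      train_dataloader=train_dataloader,
      optimizers=optimizer,
      max_duration='1ep',
  )
  trainer.fit()

For multi-GPU runs launched in a distributed setting, the default sync_bn=True is typically kept so that batch norm statistics are synchronized across processes.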