vit_small_patch16
- composer.models.vit_small_patch16(num_classes=1000, image_size=224, channels=3, dropout=0.0, embedding_dropout=0.0)
Helper function to create a ComposerClassifier using a ViT-S/16 model. See Training data-efficient image transformers & distillation through attention (Touvron et al., 2021) for details on ViT-S/16.
- Parameters
  - num_classes (int, optional) – number of classes for the model. Default: 1000.
  - image_size (int, optional) – input image size. If you have rectangular images, make sure your image size is the maximum of the width and height. Default: 224.
  - channels (int, optional) – number of image channels. Default: 3.
  - dropout (float, optional) – dropout rate between 0.0 and 1.0. Default: 0.0.
  - embedding_dropout (float, optional) – embedding dropout rate between 0.0 and 1.0. Default: 0.0.
- Returns
  - ComposerModel – instance of ComposerClassifier with a ViT-S/16 model.
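
A minimal usage sketch is shown below. The dummy batch, and the assumption that the returned ComposerClassifier's forward accepts an (images, targets) tuple, are illustrative and not part of this reference.

```python
import torch

from composer.models import vit_small_patch16

# Build a ComposerClassifier wrapping a ViT-S/16 backbone, using the
# defaults documented above (1000 classes, 224x224 RGB inputs).
model = vit_small_patch16(
    num_classes=1000,
    image_size=224,
    channels=3,
    dropout=0.0,
    embedding_dropout=0.0,
)

# Sanity-check the forward pass on random data. The (images, targets)
# batch format is an assumption about ComposerClassifier's forward().
images = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, 1000, (8,))
logits = model((images, targets))
print(logits.shape)  # expected: torch.Size([8, 1000])
```

The returned model can then be passed as the model argument to a composer Trainer for training.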