Vision Transformer and MLP-Mixer Architectures
In this repository we release models from the papers:
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
- MLP-Mixer: An all-MLP Architecture for Vision
- How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
- When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations
- LiT: Zero-Shot Transfer with Locked-image text Tuning
- Surrogate Gap Minimization Improves Sharpness-Aware Training
The models were pre-trained on the ImageNet and ImageNet-21k datasets. We provide the code for fine-tuning the released models in JAX/Flax.
The models from this codebase were originally trained in https://github.com/google-research/big_vision/ where you can find more advanced code (e.g. multi-host training), as well as some of the original training scripts (e.g. configs/vit_i21k.py for pre-training a ViT, or configs/transfer.py for transferring a model).
Colab
The Colabs below run with both GPUs and TPUs (8 cores, data parallelism).
The first Colab demonstrates the JAX code of Vision Transformers and MLP Mixers. This Colab lets you edit the files from the repository directly in the Colab UI, has annotated cells that walk you through the code step by step, and lets you interact with the data.
https://colab.research.google.com/github/google-research/vision_transformer/blob/main/vit_jax.ipynb
The second Colab allows you to explore the >50k Vision Transformer and hybrid
checkpoints that were used to generate the data of the third paper "How to train
your ViT? ...". The Colab includes code to explore and select checkpoints, and
to run inference both with the JAX code from this repo and with the popular
timm PyTorch library, which can directly load these checkpoints. Note that a
handful of models are also available directly from TF-Hub:
sayakpaul/collections/vision_transformer (external contribution by Sayak Paul).
The second Colab also lets you fine-tune the checkpoints on any tfds dataset as well as your own dataset with examples in individual JPEG files (optionally reading directly from Google Drive).
Note: as of now (6/20/21), Google Colab only supports a single GPU (Nvidia Tesla T4), and TPUs (currently TPUv2-8) are attached indirectly to the Colab VM and communicate over a slow network, which leads to poor training speed. You would usually want to set up a dedicated machine if you have a non-trivial amount of data to fine-tune on. For details see the Running on cloud section.
Installation
Make sure you have Python>=3.10
installed on your machine.
Install JAX and python dependencies by running:
# If using GPU:
pip install -r vit_jax/requirements.txt
# If using TPU:
pip install -r vit_jax/requirements-tpu.txt
For newer versions of JAX, follow the instructions provided in the corresponding repository linked here. Note that installation instructions for CPU, GPU and TPU differ slightly.
To install Flaxformer, follow the instructions provided in the corresponding repository linked here.
For more details refer to the section Running on cloud below.
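After installation, you can quickly check that JAX sees your accelerators, e.g.:

```python
import jax

print(jax.devices())             # should list your GPU(s) or 8 TPU cores
print(jax.local_device_count())  # number of devices used for data parallelism
```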
Fine-tuning a model
You can fine-tune a downloaded model on your dataset of interest. All models share the same command line interface.
For example, to fine-tune a ViT-B/16 (pre-trained on imagenet21k) on CIFAR10
(note how we specify b16,cifar10 as arguments to the config, and how we
instruct the code to access the models directly from a GCS bucket instead of
first downloading them into the local directory):
python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
--config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
--config.pretrained_dir='gs://vit_models/imagenet21k'
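The :b16,cifar10 suffix is passed as a string to the config file's get_config function (standard ml_collections behaviour). Below is a simplified sketch of how such a parameterized config might look; this is not the actual vit_jax/configs/vit.py, and the values are purely illustrative:

```python
import ml_collections


def get_config(config_string):
  # config_string is everything after ':' on the --config flag, e.g. 'b16,cifar10'.
  model, dataset = config_string.split(',')
  config = ml_collections.ConfigDict()
  config.model_name = model                              # e.g. 'b16'
  config.dataset = dataset                               # e.g. 'cifar10'
  config.batch = 512
  config.base_lr = 0.03
  config.pretrained_dir = 'gs://vit_models/imagenet21k'  # overridable on the CLI
  return config
```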
To fine-tune a Mixer-B/16 (pre-trained on imagenet21k) on CIFAR10:
python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
--config=$(pwd)/vit_jax/configs/mixer_base16_cifar10.py \
--config.pretrained_dir='gs://mixer_models/imagenet21k'
The "How to train your ViT? ..." paper added >50k checkpoints that you can
fine-tune with the [configs/augreg.py
] config. When you only specify the model
name (the config.name
value from [configs/model.py
]), then the best i21k
checkpoint by upstream validation accuracy ("recommended" checkpoint, see
section 4.5 of the paper) is chosen. To make up your mind which model you want
to use, have a look at Figure 3 in the paper. It's also possible to choose a
different checkpoint (see Colab [vit_jax_augreg.ipynb
]) and then specify the
value from the filename
or adapt_filename
column, which correspond to the
filenames without .npz
from the [gs://vit_models/augreg
] directory.
python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
--config=$(pwd)/vit_jax/configs/augreg.py:R_Ti_16 \
--config.dataset=oxford_iiit_pet \
--config.base_lr=0.01
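If you want to browse the available AugReg checkpoint names programmatically before picking one, a small sketch like the following lists matching filenames (assuming TensorFlow's GCS filesystem support and public read access to the bucket; the model prefix is just an example):

```python
import tensorflow as tf

# List checkpoints for one model family in the public AugReg bucket.
paths = tf.io.gfile.glob('gs://vit_models/augreg/R_Ti_16-*.npz')
for p in paths[:5]:
    # The part before '.npz' is the checkpoint name you can pass to the config.
    print(p.split('/')[-1][:-len('.npz')])
```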
Currently, the code will automatically download the CIFAR-10 and CIFAR-100 datasets.
Other public or custom datasets can be easily integrated using the tensorflow
datasets library. Note that you will also need to update vit_jax/input_pipeline.py
to specify some parameters about any added dataset.
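To see which splits and how many classes a tfds dataset provides (useful when wiring it into vit_jax/input_pipeline.py), a quick inspection sketch (the dataset name is just an example):

```python
import tensorflow_datasets as tfds

builder = tfds.builder('oxford_iiit_pet')
builder.download_and_prepare()                        # downloads on first use
print(builder.info.splits)                            # split names and sizes
print(builder.info.features['label'].num_classes)     # number of classes
```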
Note that our code uses all available GPUs/TPUs for fine-tuning.
To see a detailed list of all available flags, run python3 -m vit_jax.train --help.
Notes on memory:
- Different models require different amounts of memory. Available memory also
  depends on the accelerator configuration (both type and count). If you
  encounter an out-of-memory error, you can increase the value of
  --config.accum_steps=8 -- alternatively, you could also decrease
  --config.batch=512 (and decrease --config.base_lr accordingly).
- The host keeps a shuffle buffer in memory. If you encounter a host OOM (as
  opposed to an accelerator OOM), you can decrease the default
  --config.shuffle_buffer=50000.
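As a rough illustration of how these flags interact (assuming the global batch is split evenly across devices and then across accumulation steps; the variable names below are not from the codebase):

```python
import jax

batch = 512        # --config.batch
accum_steps = 8    # --config.accum_steps
num_devices = jax.local_device_count()

per_device_batch = batch // num_devices
micro_batch = per_device_batch // accum_steps  # what each accelerator processes at once
print(f'{num_devices} device(s): {per_device_batch} examples/device, '
      f'{micro_batch} per accumulation step')
```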
Vision Transformer
by Alexey Dosovitskiy*†, Lucas Beyer*, Alexander Kolesnikov*, Dirk Weissenborn*, Xiaohua Zhai*, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit and Neil Houlsby*†.
(*) equal technical contribution, (†) equal advising.
Overview of the model: we split an image into fixed-size patches, linearly embed each of them, add position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. In order to perform classification, we use the standard approach of adding an extra learnable "classification token" to the sequence.
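The following is a minimal Flax sketch of this tokenization step. It is an illustration only, not the repository's actual implementation, and the module and parameter names are hypothetical:

```python
import jax.numpy as jnp
import flax.linen as nn


class PatchEmbedding(nn.Module):
  """Hypothetical module illustrating the ViT tokenization step."""
  patch_size: int = 16
  hidden_dim: int = 768

  @nn.compact
  def __call__(self, images):  # images: [batch, height, width, 3]
    # A strided convolution is equivalent to cutting the image into
    # non-overlapping patches and applying a shared linear projection.
    x = nn.Conv(self.hidden_dim,
                kernel_size=(self.patch_size, self.patch_size),
                strides=(self.patch_size, self.patch_size))(images)
    n, h, w, c = x.shape
    x = x.reshape(n, h * w, c)  # [batch, num_patches, hidden_dim]

    # Prepend the learnable "classification token".
    cls = self.param('cls', nn.initializers.zeros, (1, 1, c))
    x = jnp.concatenate([jnp.tile(cls, (n, 1, 1)), x], axis=1)

    # Add learned position embeddings; the result is fed to the Transformer encoder.
    pos = self.param('pos_embedding',
                     nn.initializers.normal(stddev=0.02),
                     (1, x.shape[1], c))
    return x + pos
```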
Available ViT models
We provide a variety of ViT models in different GCS buckets. The models can be downloaded with e.g.:
wget https://storage.googleapis.com/vit_models/imagenet21k/ViT-B_16.npz
The model filenames (without the .npz extension) correspond to the
config.model_name in vit_jax/configs/models.py.
- gs://vit_models/imagenet21k - Models pre-trained on ImageNet-21k.
- gs://vit_models/imagenet21k+imagenet2012 - Models pre-trained on ImageNet-21k and fine-tuned on ImageNet.
- gs://vit_models/augreg - Models pre-trained on ImageNet-21k, applying varying amounts of AugReg. Improved performance.
- gs://vit_models/sam - Models pre-trained on ImageNet with SAM.
- gs://vit_models/gsam - Models pre-trained on ImageNet with GSAM.
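After downloading a checkpoint (for example with the wget command above), you can peek at its contents. This is just an inspection sketch; the exact parameter names vary per model:

```python
import numpy as np

# The checkpoint is a flat mapping of parameter name -> array.
params = np.load('ViT-B_16.npz')
print(len(params.files), 'arrays')
for name in sorted(params.files)[:5]:
    print(name, params[name].shape)
```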
We recommend using the following checkpoints, trained with AugReg, that have the best pre-training metrics:
Model | Pre-trained checkpoint | Size | Fine-tuned checkpoint | Resolution | Img/sec | Imagenet accuracy |
---|---|---|---|---|---|---|
L/16 | gs://vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_strong1-wd_0.1-do_0.0-sd_0.0.npz | 1243 MiB | gs://vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_strong1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 50 | 85.59% |
B/16 | gs://vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0.npz | 391 MiB | gs://vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 138 | 85.49% |
S/16 | gs://vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0.npz | 115 MiB | gs://vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 300 | 83.73% |
R50+L/32 | gs://vit_models/augreg/R50_L_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1.npz | 1337 MiB | gs://vit_models/augreg/R50_L_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 327 | 85.99% |
R26+S/32 | gs://vit_models/augreg/R26_S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0.npz | 170 MiB | gs://vit_models/augreg/R26_S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 560 | 83.85% |
Ti/16 | gs://vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz | 37 MiB | gs://vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 610 | 78.22% |
B/32 | gs://vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0.npz | 398 MiB | gs://vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 955 | 83.59% |
S/32 | gs://vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_none-wd_0.1-do_0.0-sd_0.0.npz | 118 MiB | gs://vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_none-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 2154 | 79.58% |
R+Ti/16 | gs://vit_models/augreg/R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz | 40 MiB | gs://vit_models/augreg/R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 2426 | 75.40% |
The results from the original ViT paper (https://arxiv.org/abs/2010.11929) have
been replicated using the models from gs://vit_models/imagenet21k:
model | dataset | dropout=0.0 | dropout=0.1 |
---|---|---|---|
R50+ViT-B_16 | cifar10 | 98.72%, 3.9h (A100), tb.dev | 98.94%, 10.1h (V100) |