MogaNet: Multi-order Gated Aggregation Network (ICLR 2024)
Siyuan Li*,1,2, Zedong Wang*,1, Zicheng Liu1,2, Chen Tan1,2, Haitao Lin1,2, Di Wu1,2, Zhiyuan Chen1, Jiangbin Zheng1,2, Stan Z. Li†,1
We propose MogaNet, a new family of efficient ConvNets designed through the lens of multi-order game-theoretic interaction, to pursue informative context mining with favorable complexity-performance trade-offs. It shows excellent scalability and attains competitive results among state-of-the-art models, with more efficient use of parameters, on ImageNet and a variety of typical vision benchmarks, including COCO object detection, ADE20K semantic segmentation, 2D & 3D human pose estimation, and video prediction.
This repository contains PyTorch implementation for MogaNet (ICLR 2024).
Catalog
We plan to release the implementations of MogaNet over the next few months. Please watch this repo for the latest releases. Currently, this repo is reimplemented according to our official implementation in OpenMixup, and we are cleaning up the experimental results and code. Models are released on GitHub / Baidu Cloud / Hugging Face.
- ImageNet-1K Training and Validation Code with timm [code] [models] [Hugging Face 🤗]
- ImageNet-1K Training and Validation Code in OpenMixup / MMPretrain (TODO)
- Downstream Transfer to Object Detection and Instance Segmentation on COCO [code] [models] [demo]
- Downstream Transfer to Semantic Segmentation on ADE20K [code] [models] [demo]
- Downstream Transfer to 2D Human Pose Estimation on COCO [code] (baselines supported) [models] [demo]
- Downstream Transfer to 3D Human Pose Estimation (baseline models will be supported)
- Downstream Transfer to Video Prediction on MMNIST Variants [code] (baselines supported)
- Image Classification on Google Colab and Notebook Demo [demo]
Image Classification
1. Installation
Please check INSTALL.md for installation instructions.
2. Training and Validation
See TRAINING.md for ImageNet-1K training and validation instructions, or refer to our OpenMixup implementations. Pre-trained models trained with OpenMixup are released in moganet-in1k-weights. We have also reproduced the ImageNet results with this repo and released the corresponding args.yaml / summary.csv / model.pth.tar files in moganet-in1k-weights. The model parameters can be extracted from the trained checkpoint with the provided code.
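For reference, a minimal sketch of such an extraction step, assuming the checkpoint follows timm's training-checkpoint convention (keys `state_dict` / `state_dict_ema`; file names are placeholders):

```python
import torch

# Load a timm-style training checkpoint (model.pth.tar), which typically
# bundles model weights with optimizer state and training metadata.
ckpt = torch.load('model.pth.tar', map_location='cpu')

# Prefer EMA weights if present, then the plain model weights; otherwise
# treat the loaded object as a raw state dict.
state_dict = ckpt.get('state_dict_ema') or ckpt.get('state_dict') or ckpt

# Save only the model parameters for later evaluation or fine-tuning.
torch.save(state_dict, 'moganet_weights.pth')
```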
Here is a notebook demo of MogaNet that walks through the steps to perform inference with MogaNet for image classification.
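If you prefer a script over the notebook, the sketch below shows the same inference flow with timm. It assumes that importing this repo's `models` module registers the `moganet_*` variants with timm, and the image path is a placeholder:

```python
import torch
from PIL import Image
import timm
import models  # assumed: importing this repo's models registers moganet_* with timm

# Build the model and its matching preprocessing pipeline.
model = timm.create_model('moganet_tiny', pretrained=True).eval()
config = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**config)

# Preprocess a single image and run inference.
img = Image.open('/path/to/image.JPEG').convert('RGB')
x = transform(img).unsqueeze(0)  # (1, 3, H, W)
with torch.no_grad():
    probs = model(x).softmax(dim=-1)

# Report the top-5 class indices and probabilities.
top5_prob, top5_idx = probs.topk(5)
print(top5_idx.squeeze().tolist(), top5_prob.squeeze().tolist())
```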
3. ImageNet-1K Trained Models
| Model | Resolution | Params (M) | FLOPs (G) | Top-1 (%) | Top-5 (%) | Config | Script | Download |
|---|---|---|---|---|---|---|---|---|
| MogaNet-XT | 224x224 | 2.97 | 0.80 | 76.5 | 93.4 | args | script | model / log |
| MogaNet-XT | 256x256 | 2.97 | 1.04 | 77.2 | 93.8 | args | script | model / log |
| MogaNet-T | 224x224 | 5.20 | 1.10 | 79.0 | 94.6 | args | script | model / log |
| MogaNet-T | 256x256 | 5.20 | 1.44 | 79.6 | 94.9 | args | script | model / log |
| MogaNet-T* | 256x256 | 5.20 | 1.44 | 80.0 | 95.0 | config | script | model / log |
| MogaNet-S | 224x224 | 25.3 | 4.97 | 83.4 | 96.9 | args | script | model / log |
| MogaNet-B | 224x224 | 43.9 | 9.93 | 84.3 | 97.0 | args | script | model / log |
| MogaNet-L | 224x224 | 82.5 | 15.9 | 84.7 | 97.1 | args | script | model / log |
| MogaNet-XL | 224x224 | 180.8 | 34.5 | 85.1 | 97.4 | args | script | model / log |
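As a sanity check, the parameter counts in the table can be reproduced by instantiating each variant and counting its tensors. This is a sketch; the `moganet_*` model names are assumed to be registered with timm by this repo's `models` module:

```python
import timm
import models  # assumed: registers the moganet_* variants with timm

for name in ('moganet_xtiny', 'moganet_tiny', 'moganet_small',
             'moganet_base', 'moganet_large', 'moganet_xlarge'):
    model = timm.create_model(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f'{name}: {n_params / 1e6:.2f}M parameters')
```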
4. Analysis Tools
(1) The script to count MACs of MogaNet variants:

```bash
python get_flops.py --model moganet_tiny
```
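If you want an independent count without `get_flops.py`, a sketch with fvcore is below. Note that fvcore counts one fused multiply-add as a single "flop", so its totals correspond to MACs; the `models` import is an assumption as above:

```python
import torch
from fvcore.nn import FlopCountAnalysis
import timm
import models  # assumed: registers moganet_* with timm

model = timm.create_model('moganet_tiny').eval()
# fvcore traces the model on a dummy input; 224x224 matches the table above.
flops = FlopCountAnalysis(model, torch.randn(1, 3, 224, 224))
print(f'{flops.total() / 1e9:.2f} G MACs')
```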
(2) The script to visualize Grad-CAM activation maps (or Grad-CAM variants) of MogaNet and other popular architectures:

```bash
python cam_image.py --use_cuda --image_path /path/to/image.JPEG --model moganet_tiny --method gradcam
```
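A minimal standalone sketch of the same visualization with the pytorch-grad-cam package follows. The target-layer attribute name and the `models` import are assumptions and should be adapted to MogaNet's actual module layout:

```python
import numpy as np
import torch
from PIL import Image
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image
import timm
import models  # assumed: registers moganet_* with timm

model = timm.create_model('moganet_tiny', pretrained=True).eval()

# Preprocess the image; keep a float copy in [0, 1] for the overlay.
img = Image.open('/path/to/image.JPEG').convert('RGB').resize((224, 224))
rgb = np.asarray(img, dtype=np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = torch.from_numpy((rgb - mean) / std).permute(2, 0, 1).unsqueeze(0)

# Assumed attribute name for the last-stage feature norm; adjust to the
# actual MogaNet module layout if it differs.
target_layers = [model.norm4]

cam = GradCAM(model=model, target_layers=target_layers)
grayscale_cam = cam(input_tensor=x)[0]  # no targets -> highest-scoring class
overlay = show_cam_on_image(rgb, grayscale_cam, use_rgb=True)
Image.fromarray(overlay).save('cam_output.png')
```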