pythae
This library implements some of the most common (Variational) Autoencoder models under a unified implementation. In particular, it makes it possible to run benchmark experiments and comparisons by training the models with the same autoencoding neural network architecture. The *make your own autoencoder* feature lets you train any of these models with your own data and your own Encoder and Decoder neural networks. The library integrates experiment-monitoring tools such as wandb, mlflow or comet-ml 🧪 and allows model sharing and loading from the HuggingFace Hub 🤗 in a few lines of code.
News 📢
As of v0.1.0, Pythae
now supports distributed training using PyTorch's DDP. You can now train your favorite VAE faster and on larger datasets, still with only a few lines of code.
See our speed-up benchmark.
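A hedged sketch of what enabling DDP looks like: distributed settings are passed through the trainer configuration and the script is launched with a standard PyTorch launcher such as `torchrun`. The field names (`world_size`, `rank`, `local_rank`, `dist_backend`) follow pythae's documented `BaseTrainerConfig`, but may differ across versions.

```python
# Hedged sketch of a distributed training config for pythae.
# Launch with e.g.: torchrun --nproc_per_node=2 train.py
# Field names follow the documented BaseTrainerConfig; check your version.
import os
from pythae.trainers import BaseTrainerConfig

trainer_config = BaseTrainerConfig(
    output_dir="my_vae_ddp",
    num_epochs=10,
    per_device_train_batch_size=64,
    dist_backend="nccl",  # GPU backend; use "gloo" for CPU-only runs
    # torchrun sets these environment variables for each process
    world_size=int(os.environ.get("WORLD_SIZE", 1)),
    rank=int(os.environ.get("RANK", 0)),
    local_rank=int(os.environ.get("LOCAL_RANK", 0)),
)
```

The rest of the training code is unchanged: the same model and `TrainingPipeline` calls work in both single-process and distributed settings.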
Quick access:
- Installation
- Implemented models / Implemented samplers
- Reproducibility statement / Results flavor
- Model training / Data generation / Custom network architectures / Distributed training
- Model sharing with 🤗 Hub / Experiment tracking with wandb / Experiment tracking with mlflow / Experiment tracking with comet_ml
- Tutorials / Documentation
- Contributing 🚀 / Issues 🛠️
- Citing this repository
Installation
To install the latest stable release of this library, run the following using pip:
$ pip install pythae
To install the latest GitHub version of this library, run the following using pip:
$ pip install git+https://github.com/clementchadebec/benchmark_VAE.git
Alternatively, you can clone the GitHub repo to access tests, tutorials and scripts:
$ git clone https://github.com/clementchadebec/benchmark_VAE.git
and install the library:
$ cd benchmark_VAE
$ pip install -e .
Available Models
Below is the list of the models currently implemented in the library.
Models | Training example | Paper | Official Implementation |
---|---|---|---|
Autoencoder (AE) | |||
Variational Autoencoder (VAE) | link | ||
Beta Variational Autoencoder (BetaVAE) | link | ||
VAE with Linear Normalizing Flows (VAE_LinNF) | link | ||
VAE with Inverse Autoregressive Flows (VAE_IAF) | link | link | |
Disentangled Beta Variational Autoencoder (DisentangledBetaVAE) | link | ||
Disentangling by Factorising (FactorVAE) | link | ||
Beta-TC-VAE (BetaTCVAE) | link | link | |
Importance Weighted Autoencoder (IWAE) | link | link | |
Multiply Importance Weighted Autoencoder (MIWAE) | link | ||
Partially Importance Weighted Autoencoder (PIWAE) | link | ||
Combination Importance Weighted Autoencoder (CIWAE) | link | ||
VAE with perceptual metric similarity (MSSSIM_VAE) | link | ||
Wasserstein Autoencoder (WAE) | link | link | |
Info Variational Autoencoder (INFOVAE_MMD) | link | ||
VAMP Autoencoder (VAMP) | link | link | |
Hyperspherical VAE (SVAE) | link | link | |
Poincaré Disk VAE (PoincareVAE) | link | link | |
Adversarial Autoencoder (Adversarial_AE) | link | ||
Variational Autoencoder GAN (VAEGAN) 🥗 | link | link | |
Vector Quantized VAE (VQVAE) | link | link | |
Hamiltonian VAE (HVAE) | link | link | |
Regularized AE with L2 decoder param (RAE_L2) | link | link | |
Regularized AE with gradient penalty (RAE_GP) | link | link | |
Riemannian Hamiltonian VAE (RHVAE) | link | link | |
Hierarchical Residual Quantization (HRQVAE) | link | link |
See reconstruction and generation results for all aforementioned models
Available Samplers
Below is the list of the samplers currently implemented in the library.