Looking for someone who would like to help me maintain this repository! Contact me on LN or simply open a PR!
Data augmentation
A list of useful data augmentation resources. You will find here links to more or less popular GitHub repos :sparkles:, libraries, papers :books:, and other information.
Do you like it? Feel free to :star: it! Pull requests are also welcome!
Featured ⭐
Data augmentation for bias mitigation?
- Targeted Data Augmentation for bias mitigation; Agnieszka Mikołajczyk-Bareła, Maria Ferlin, Michał Grochowski; The development of fair and ethical AI systems requires careful consideration of bias mitigation, an area often overlooked or ignored. In this study, we introduce a novel and efficient approach for addressing biases called Targeted Data Augmentation (TDA), which leverages classical data augmentation techniques to tackle the pressing issue of bias in data and models. Unlike the laborious task of removing biases, our method proposes to insert biases instead, resulting in improved performance. (...)
Introduction
Data augmentation can be simply described as any method that makes our dataset larger by creating modified copies of the existing data. To create more images, for example, we could zoom in on an image and save the result, change its brightness, or rotate it. To get a bigger sound dataset, we could raise or lower the pitch of an audio sample or slow it down/speed it up. Example data augmentation techniques are presented in the overview below.
- Image augmentation
  - Affine transformations
    - Rotation
    - Scaling
    - Random cropping
    - Reflection
  - Elastic transformations
    - Contrast shift
    - Brightness shift
    - Blurring
    - Channel shuffle
  - Advanced transformations
    - Random erasing
    - Adding rain effects, sun flare...
    - Image blending
  - Neural-based transformations
    - Adversarial noise
    - Neural Style Transfer
    - Generative Adversarial Networks
- Audio augmentation
  - Noise injection
  - Time shift
  - Time stretching
  - Random cropping
  - Pitch scaling
  - Dynamic range compression
  - Simple gain
  - Equalization
  - Voice conversion (Speech)
- Natural Language Processing augmentation
  - Thesaurus
  - Text Generation
  - Back Translation
  - Word Embeddings
  - Contextualized Word Embeddings
  - Paraphrasing
  - Text perturbation
- Time Series Data Augmentation
  - Basic approaches
    - Warping
    - Jittering
    - Perturbing
  - Advanced approaches
    - Embedding space
    - GAN/Adversarial
    - RL/Meta-Learning
- Graph Augmentation
  - Node/edge dropping
  - Node/edge addition (graph modification)
  - Edge perturbation
- Gene expression Augmentation
  - Data generation with GANs
  - Mixing observations
  - Random variable insertion
- Automatic Augmentation (AutoAugment)
- Other
  - Keypoints/landmarks Augmentation - usually done with image augmentation (rotation, reflection) or graph augmentation methods (node/edge dropping)
  - Spectrograms/Melspectrograms - usually done with time series data augmentation (jittering, perturbing, warping) or image augmentation (random erasing)
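To make a few of the image techniques above concrete, here is a minimal sketch of rotation, reflection, brightness shift, and random cropping using Pillow and NumPy (the file name is only a placeholder):

```python
import numpy as np
from PIL import Image, ImageEnhance, ImageOps

# Load a sample image (the file name is only a placeholder).
image = Image.open("cat.jpg")

# Affine transformations: rotation and reflection.
rotated = image.rotate(15, expand=True)        # rotate by 15 degrees
mirrored = ImageOps.mirror(image)              # horizontal reflection

# Brightness shift by a random factor.
factor = np.random.uniform(0.7, 1.3)
brightened = ImageEnhance.Brightness(image).enhance(factor)

# Random cropping: keep a random 80% window of the original.
w, h = image.size
cw, ch = int(w * 0.8), int(h * 0.8)
left = np.random.randint(0, w - cw + 1)
top = np.random.randint(0, h - ch + 1)
cropped = image.crop((left, top, left + cw, top + ch))

# Saving each modified copy next to the original enlarges the dataset.
for name, img in [("rot", rotated), ("mir", mirrored),
                  ("bright", brightened), ("crop", cropped)]:
    img.save(f"cat_{name}.jpg")
```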
If you wish to cite us, you can cite either of the following papers: Style transfer-based image synthesis as an efficient regularization technique in deep learning or Data augmentation for improving deep learning in image classification problem.
Repositories
Computer vision
- albumentations is a Python library with a large and diverse set of useful data augmentation methods. It offers over 30 different types of augmentations that are easy and ready to use. Moreover, as the authors' benchmarks show, the library is faster than other libraries on most of the transformations.
Example Jupyter notebooks:
- All in one showcase notebook
- Classification,
- Object detection, image segmentation and keypoints
- Others - Weather transforms, Serialization, Replay/Deterministic mode, Non-8-bit images
Example transformations:
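A minimal usage sketch in the style of the library's documentation (the random array below is just a placeholder for a real image):

```python
import numpy as np
import albumentations as A

# Declare an augmentation pipeline.
transform = A.Compose([
    A.RandomCrop(width=256, height=256),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])

# A placeholder image; in practice this would be read with e.g. cv2.imread.
image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)

# Apply the pipeline; the result is returned under the "image" key.
augmented = transform(image=image)
augmented_image = augmented["image"]
```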
- imgaug is another very useful and widely used Python library. As the author describes it: it helps you with augmenting images for your machine learning projects. It converts a set of input images into a new, much larger set of slightly altered images. It offers many augmentation techniques such as affine transformations, perspective transformations, contrast changes, Gaussian noise, dropout of regions, hue/saturation changes, cropping/padding, and blurring.
Example Jupyter notebooks:
- Load and Augment an Image
- Multicore Augmentation
- Augment and work with: Keypoints/Landmarks, Bounding Boxes, Polygons, Line Strings, Heatmaps, Segmentation Maps
Example transformations:
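A minimal sketch of an imgaug pipeline applied to a batch of images (the random batch stands in for real data):

```python
import numpy as np
import imgaug.augmenters as iaa

# A batch of 8 random RGB images as a stand-in for real data.
images = np.random.randint(0, 256, (8, 128, 128, 3), dtype=np.uint8)

# Define a sequence of augmenters applied to every image in the batch.
seq = iaa.Sequential([
    iaa.Fliplr(0.5),                      # horizontally flip 50% of the images
    iaa.Affine(rotate=(-25, 25)),         # random rotation
    iaa.GaussianBlur(sigma=(0, 1.5)),     # random blur
    iaa.AddToHueAndSaturation((-20, 20)), # hue/saturation change
])

images_aug = seq(images=images)
```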
- Kornia is a differentiable computer vision library for PyTorch. It consists of a set of routines and differentiable modules to solve generic computer vision problems. At its core, the package uses PyTorch as its main backend, both for efficiency and to take advantage of reverse-mode auto-differentiation to define and compute the gradient of complex functions.
At a granular level, Kornia is a library that consists of the following components:
| Component | Description |
|---|---|
| kornia | a Differentiable Computer Vision library, with strong GPU support |
| kornia.augmentation | a module to perform data augmentation in the GPU |
| kornia.color | a set of routines to perform color space conversions |
| kornia.contrib | a compilation of user contributed and experimental operators |
| kornia.enhance | a module to perform normalization and intensity transformation |
| kornia.feature | a module to perform feature detection |
| kornia.filters | a module to perform image filtering and edge detection |
| kornia.geometry | a geometric computer vision library to perform image transformations, 3D linear algebra and conversions using different camera models |
| kornia.losses | a stack of loss functions to solve different vision tasks |
| kornia.morphology | a module to perform morphological operations |
| kornia.utils | image to tensor utilities and metrics for vision problems |
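Because the augmentation modules are standard torch.nn layers operating on batched tensors, they can run on the GPU and remain differentiable. A minimal sketch:

```python
import torch
import torch.nn as nn
import kornia.augmentation as K

# A batch of 4 RGB images in [0, 1], shape (B, C, H, W).
images = torch.rand(4, 3, 224, 224)

# Augmentations are nn.Modules, so they compose like any other layers
# and can be moved to the GPU together with the model.
augment = nn.Sequential(
    K.RandomHorizontalFlip(p=0.5),
    K.ColorJitter(brightness=0.2, contrast=0.2, p=0.8),
    K.RandomAffine(degrees=15.0, p=0.5),
)

out = augment(images)  # same shape as the input, gradients flow through
```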
- UDA - an implementation of Unsupervised Data Augmentation, a semi-supervised learning technique in which strong data augmentation is applied to unlabeled examples and the model is trained to make consistent predictions on an unlabeled example and its augmented version, greatly reducing the amount of labeled data needed.
The details are available here: UNSUPERVISED DATA AUGMENTATION FOR CONSISTENCY TRAINING
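At its heart is a consistency objective added to the usual supervised loss. The following is only a rough sketch of that idea, assuming a generic `model` and `augment` function; it is not the repository's code:

```python
import torch
import torch.nn.functional as F

def uda_style_loss(model, x_labeled, y_labeled, x_unlabeled, augment, lam=1.0):
    """Supervised cross-entropy plus a consistency loss on unlabeled data.

    `model` and `augment` are placeholders for an arbitrary classifier and
    an arbitrary (strong) augmentation function.
    """
    # Standard supervised term on the labeled batch.
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Predictions on the clean unlabeled batch act as fixed targets.
    with torch.no_grad():
        target = F.softmax(model(x_unlabeled), dim=-1)

    # Predictions on the augmented unlabeled batch should match the targets.
    log_pred = F.log_softmax(model(augment(x_unlabeled)), dim=-1)
    consistency = F.kl_div(log_pred, target, reduction="batchmean")

    return sup_loss + lam * consistency
```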
- Data augmentation for object detection - this repository contains the code for the Paperspace tutorial series on adapting data augmentation methods for object detection tasks. It supports many augmentations, such as horizontal flipping, scaling, translation, rotation, shearing, and resizing.
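The key point when augmenting for detection is that the bounding boxes must be transformed together with the image. A tiny NumPy sketch of a horizontal flip (not the repository's API) illustrates this:

```python
import numpy as np

def hflip_with_boxes(image, boxes):
    """Horizontally flip an HxWxC image and its boxes given as
    (x_min, y_min, x_max, y_max) in pixel coordinates."""
    height, width = image.shape[:2]
    flipped = image[:, ::-1, :]

    boxes = boxes.copy().astype(float)
    x_min = boxes[:, 0].copy()
    x_max = boxes[:, 2].copy()
    # Mirror the x-coordinates and swap min/max so boxes stay well-formed.
    boxes[:, 0] = width - x_max
    boxes[:, 2] = width - x_min
    return flipped, boxes

image = np.zeros((100, 200, 3), dtype=np.uint8)
boxes = np.array([[10, 20, 60, 80]])   # one box on the left side
_, flipped_boxes = hflip_with_boxes(image, boxes)
print(flipped_boxes)                   # [[140. 20. 190. 80.]]
```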
- FMix - this repository contains the official implementation of the paper 'Understanding and Enhancing Mixed Sample Data Augmentation'.
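Mixed sample data augmentation blends pairs of training examples rather than perturbing single images. As a point of reference, mixup, the simplest such method (not FMix itself, which mixes with binary masks sampled in Fourier space), can be sketched as:

```python
import numpy as np
import torch

def mixup_batch(x, y_onehot, alpha=1.0):
    """Blend a batch with a shuffled copy of itself.

    x        : float tensor of shape (B, C, H, W)
    y_onehot : float tensor of shape (B, num_classes)
    """
    lam = np.random.beta(alpha, alpha)          # mixing coefficient
    perm = torch.randperm(x.size(0))            # random pairing of examples
    x_mixed = lam * x + (1.0 - lam) * x[perm]   # convex combination of inputs
    y_mixed = lam * y_onehot + (1.0 - lam) * y_onehot[perm]  # and of labels
    return x_mixed, y_mixed
```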
- Super-AND - this repository is the PyTorch implementation of "A Comprehensive Approach to Unsupervised Embedding Learning based on AND Algorithm".
- vidaug - This Python library helps you with augmenting videos for your deep learning architectures. It converts input videos into a new, much larger set of slightly altered videos.
- Image augmentor - This is a simple Python data augmentation tool for image files, intended for use with machine learning data sets. The tool scans a directory containing image files, and generates new images by performing a specified set of augmentation operations on each file that it finds. This process multiplies the number of training examples that can be used when developing a neural network, and should significantly improve the resulting network's performance, particularly when the number of training examples is relatively small.
- torchsample - this Python package provides high-level training, data augmentation, and utilities for PyTorch. The toolbox provides data augmentation methods, regularizers, and other utility functions. The following transforms work directly on torch tensors:
- Compose()
- AddChannel()
- SwapDims()
- RangeNormalize()
- StdNormalize()
- Slice2D()
- RandomCrop()
- SpecialCrop()
- Pad()
- RandomFlip()
- Random erasing - the code is based on the paper: https://arxiv.org/abs/1708.04896. Abstract:
In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person re-identification. Code is available at: this https URL.
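A minimal NumPy sketch of the idea described in the abstract (erase a randomly placed rectangle with random values); this is not the repository's implementation, which also constrains the rectangle's area and aspect ratio within configurable ranges:

```python
import numpy as np

def random_erase(image, p=0.5, max_frac=0.4, rng=None):
    """Erase a randomly placed rectangle of an HxWxC uint8 image with random values."""
    rng = rng or np.random.default_rng()
    if rng.random() > p:
        return image                      # keep some images untouched
    out = image.copy()
    h, w, c = out.shape
    # Sample the rectangle's size (up to max_frac of each dimension) and position.
    eh = int(rng.integers(1, max(2, int(h * max_frac))))
    ew = int(rng.integers(1, max(2, int(w * max_frac))))
    top = int(rng.integers(0, h - eh + 1))
    left = int(rng.integers(0, w - ew + 1))
    # Fill the region with random pixel values, simulating occlusion.
    out[top:top + eh, left:left + ew] = rng.integers(
        0, 256, size=(eh, ew, c), dtype=np.uint8
    )
    return out

image = np.zeros((64, 64, 3), dtype=np.uint8)
erased = random_erase(image, p=1.0)
```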