The library is faster than other augmentation libraries for most transformations.
Based on numpy, OpenCV, and imgaug, picking the best from each of them.
Simple, flexible API that allows the library to be used in any computer vision pipeline.
Large, diverse set of transformations.
Easy to extend the library to wrap around other libraries.
Easy to extend to other tasks.
Supports transformations on images, masks, keypoints, and bounding boxes.
Supports Python 2.7-3.7.
Easy integration with PyTorch.
Easy transfer from torchvision.
Was used to obtain top results in many deep learning competitions at Kaggle, Topcoder, CVPR, and MICCAI.
Written by Kaggle Masters.
How to use
All in one showcase notebook - showcase.ipynb
Classification - example.ipynb
Object detection - example_bboxes.ipynb
Non-8-bit images - example_16_bit_tiff.ipynb
Image segmentation - example_kaggle_salt.ipynb
Keypoints - example_keypoints.ipynb
Custom targets - example_multi_target.ipynb
Weather transforms - example_weather_transforms.ipynb
You can use this Google Colaboratory notebook to adjust image augmentation parameters and see the resulting images.
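For a quick feel of the API, here is a minimal sketch (the file path and the transform choices are illustrative, not taken from the notebooks above): a pipeline is declared with Compose and applied by calling it with named targets, which returns a dict of augmented results.
import cv2
from albumentations import Compose, HorizontalFlip, ShiftScaleRotate, RandomBrightnessContrast

# Read an image as a numpy array; albumentations operates on numpy arrays.
image = cv2.imread("some_image.jpg")  # placeholder path
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Declare an augmentation pipeline; each transform is applied with probability p.
augment = Compose([
    HorizontalFlip(p=0.5),
    ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=15, p=0.5),
    RandomBrightnessContrast(p=0.3),
])

# Apply the pipeline; targets are passed as named arguments and returned in a dict.
augmented = augment(image=image)
augmented_image = augmented["image"]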
Installation
pip install albumentations
If you want to get the latest version of the code before it is released on PyPI, you can install the library from GitHub:
pip install -U git+https://github.com/albu/albumentations
The library also works in Kaggle GPU kernels (proof):
!pip install albumentations > /dev/null
Conda
To install albumentations using conda, you first need to install imgaug via pip:
pip install imgaug
conda install albumentations -c albumentations
Documentation
The full documentation is available at albumentations.readthedocs.io.
Pixel-level transforms
Pixel-level transforms change only the input image and leave any additional targets such as masks, bounding boxes, and keypoints unchanged (see the sketch after the list below). The list of pixel-level transforms:
CLAHE
ChannelShuffle
Cutout
FromFloat
GaussNoise
GaussianBlur
HueSaturationValue
IAAAdditiveGaussianNoise
IAAEmboss
IAASharpen
IAASuperpixels
InvertImg
JpegCompression
MedianBlur
MotionBlur
Normalize
RGBShift
RandomBrightness
RandomBrightnessContrast
RandomContrast
RandomFog
RandomGamma
RandomRain
RandomShadow
RandomSnow
RandomSunFlare
ToFloat
ToGray
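To illustrate this behavior, here is a small sketch; the inputs and transform choices are arbitrary examples. When an image and a mask are passed together, a pipeline built only from pixel-level transforms alters the image but returns the mask unchanged.
import numpy as np
from albumentations import Compose, GaussNoise, RandomBrightnessContrast

# Toy inputs: a random RGB image and a binary mask of the same spatial size.
image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
mask = np.zeros((256, 256), dtype=np.uint8)
mask[64:192, 64:192] = 1

# A pipeline made only of pixel-level transforms.
pixel_aug = Compose([
    GaussNoise(p=1.0),
    RandomBrightnessContrast(p=1.0),
])

augmented = pixel_aug(image=image, mask=mask)
# The image has changed, but the mask comes back untouched.
assert np.array_equal(augmented["mask"], mask)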
Spatial-level transforms
Spatial-level transforms simultaneously change the input image and any additional targets such as masks, bounding boxes, and keypoints. The following table shows which additional targets are supported by each transform (a usage sketch follows the table).
Transform | Image | Masks | BBoxes | Keypoints
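By contrast, spatial-level transforms update all declared targets consistently. Below is a hedged sketch; the bounding box format, the label field name, and the BboxParams object are assumptions based on recent versions of the library (older releases accept a plain dict for bbox_params).
import numpy as np
from albumentations import Compose, HorizontalFlip, BboxParams

image = np.random.randint(0, 256, (100, 200, 3), dtype=np.uint8)
mask = np.zeros((100, 200), dtype=np.uint8)
bboxes = [(10, 20, 60, 80)]  # pascal_voc format: (x_min, y_min, x_max, y_max)
labels = [1]

# Declaring bbox parameters tells Compose how to interpret and clip the boxes.
spatial_aug = Compose(
    [HorizontalFlip(p=1.0)],
    bbox_params=BboxParams(format="pascal_voc", label_fields=["labels"]),
)

augmented = spatial_aug(image=image, mask=mask, bboxes=bboxes, labels=labels)
# Image, mask, and bounding boxes are all flipped consistently.
print(augmented["bboxes"])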
Migrating from torchvision to albumentations
Migrating from torchvision to albumentations is simple - you just need to change a few lines of code.
Albumentations has equivalents for common torchvision transforms as well as plenty of transforms that are not present in torchvision.
migrating_from_torchvision_to_albumentations.ipynb shows how one can migrate code from torchvision to albumentations.
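For illustration, here is a hedged sketch of the kind of change involved; both pipelines are generic examples rather than code from the notebook. torchvision transforms operate on PIL images and are called positionally, while albumentations operates on numpy arrays and is called with named targets.
import numpy as np
from torchvision import transforms
from albumentations import Compose, Resize, HorizontalFlip, Rotate

# torchvision: works on PIL images, called positionally.
torchvision_pipeline = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
])
# augmented_pil = torchvision_pipeline(pil_image)

# albumentations: works on numpy arrays, called with named targets.
albumentations_pipeline = Compose([
    Resize(224, 224),
    HorizontalFlip(p=0.5),
    Rotate(limit=15),
])
image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
augmented_image = albumentations_pipeline(image=image)["image"]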
Benchmarking results
To run the benchmark yourself, follow the instructions in benchmark/README.md.
The benchmark was run on the first 2000 images from the ImageNet validation set using an Intel Core i7-7800X CPU. It measures how many images per second can be processed on a single core; higher is better.
The following libraries and versions were benchmarked against albumentations: imgaug 0.2.8, torchvision (Pillow backend) 0.2.2.post3, torchvision (Pillow-SIMD backend) 0.2.2.post3, Keras 2.2.4, Augmentor 0.2.3, solt 0.1.5.
Python and library versions: Python 3.6.8 (Anaconda), numpy 1.16.2, pillow 5.4.1, pillow-simd 5.3.0.post0, opencv-python 4.0.0.21, scikit-image 0.14.2, scipy 1.2.1.
Contributing
Clone the repository:
git clone [email protected]:albu/albumentations.git
cd albumentations
Install the library in development mode:
pip install -e .[tests]
Run tests:
pytest
Run flake8 to perform PEP8 and PEP257 style checks and to check code for lint errors.
flake8
Adding new transforms
If you are contributing a new transformation, make sure to update the "Pixel-level transforms" and/or "Spatial-level transforms" sections of this file (README.md). To do this, simply run (with Python 3 only):
python3 tools/make_transforms_docs.py make
and copy/paste the results into the corresponding sections. To validate your modifications, you can run:
python3 tools/make_transforms_docs.py check README.md
Building the documentation
Go to the docs/ directory:
cd docs
Install the required libraries:
pip install -r requirements.txt
Build the HTML files:
make html
Open _build/html/index.html in a browser.
Alternatively, you can start a web server that rebuilds the documentation automatically when a change is detected by running make livehtml.
Comments
On some systems, in the multi-GPU regime, PyTorch may deadlock the DataLoader if OpenCV was compiled with OpenCL optimizations. Adding the following two lines before the library import may help. For more details, see https://github.com/pytorch/pytorch/issues/1355
import cv2

# Disable OpenCV's internal threading and OpenCL to avoid DataLoader deadlocks.
cv2.setNumThreads(0)
cv2.ocl.setUseOpenCL(False)
Citing
If you find this library useful for your research, please consider citing:
@article{2018arXiv180906839B,
  author = {A. Buslaev and A. Parinov and E. Khvedchenya and V.~I. Iglovikov and A.~A. Kalinin},
  title = "{Albumentations: fast and flexible image augmentations}",
  journal = {ArXiv e-prints},
  eprint = {1809.06839},
  year = 2018
}