Aydin is a user-friendly, feature-rich, and fast image denoising tool that provides a number of self-supervised, auto-tuned, and unsupervised image denoising algorithms. Aydin handles n-dimensional, array-structured images out of the box, with an arbitrary number of batch and channel dimensions and typically up to four spatio-temporal dimensions.
It comes with Aydin Studio, a graphical user interface for easily experimenting with all the different algorithms and parameters available; a command-line interface for running large jobs in the terminal, possibly on powerful remote machines; an API for custom coding and integration into your scripts and applications; and a napari plugin for denoising directly within the napari image viewer. More details and exhaustive explanations can be found in Aydin's documentation.
Aydin currently supports two main families of denoisers. The first family consists of 'classical' denoising algorithms that leverage, among other things, frequency-domain filtering, smoothness priors, low-rank representations, and self-similarity. The second family consists of algorithms that leverage machine learning approaches such as convolutional neural networks (CNN) or gradient boosting (GB).
In the Noise2Self paper we show that it is possible to calibrate any parameterised denoising algorithm, from the few parameters of a classical algorithm to the millions of weights of a deep neural network. We leverage and extend these ideas in Aydin to provide a variety of auto-tuned and trained high-quality image denoisers. This means, for example, that we can automatically discover the optimal parameters for non-local means (NLM) denoising, or the best cut-off frequencies for a low-pass denoiser. These parameters are difficult to determine 'by hand', but when auto-tuned we show (see use-cases) that you can get remarkable results even with simple 'classic' denoisers, and even be competitive against more complex and slower approaches such as deep-learning-based denoisers, which can also be prone to hallucination and 'copy-paste' effects. Importantly, our experience denoising many different kinds of images has shown that there is no single 'silver-bullet' denoiser; different kinds of datasets require different approaches. Here is the list of currently available methods:
- Low-pass filtering based algorithms:
  - Butterworth denoiser (butterworth).
  - Gaussian blur denoiser (gaussian).
  - Gaussian-Median mixed denoiser (gm).
- Optimisation-based with smoothness priors:
  - Total-Variation denoising (tv).
  - Harmonic prior (harmonic).
- Spectral and wavelet domain:
  - Spectral denoising (spectral).
  - Wavelet denoising (wavelet).
  - PCA denoising (pca).
- Low-rank representations:
  - Denoising via sparse decomposition (e.g. OMP) over a fixed dictionary (DCT, DST, ...).
  - Denoising via sparse decomposition (e.g. OMP) over a learned dictionary (K-means, PCA, ICA, SDL, ...).
- Edge-preserving:
  - Bilateral denoising (bilateral).
- Patch similarity:
  - Non-Local Means denoising (nlm).
  - BMnD -- Block-Matching nD denoising, a generalization of BM3D (bmnd).
- Machine learning based:
  - Noise2Self-FGR: Noise2Self denoising via Feature Generation and Regression (FGR). We use specially crafted integral features. Several variants leverage different regressors: CatBoost (cb), LightGBM, linear, perceptron, random forest, and support vector regression.
  - Noise2Self-CNN: Noise2Self denoising via Convolutional Neural Networks (CNN) using PyTorch. This is the original approach of Noise2Self. In our experience it is typically slower to train, and more prone to hallucination and residual noise, than FGR.
- Other:
  - Lipschitz continuity denoising.
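The Noise2Self-style auto-tuning mentioned above can be sketched in a few lines. The following is a minimal, illustrative example (not Aydin's actual implementation): it masks a grid of pixels, replaces them by neighbour averages so the denoiser never sees their own noise (J-invariance), and grid-searches the sigma of a plain Gaussian denoiser against the self-supervised loss. The function names and the candidate sigma grid are assumptions for the sketch.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def noise2self_score(noisy, sigma, stride=4):
    """Self-supervised (Noise2Self-style) loss for a Gaussian denoiser."""
    mask = np.zeros(noisy.shape, dtype=bool)
    mask[::stride, ::stride] = True
    # Replace masked pixels by the mean of their 4-neighbours so the
    # denoiser never sees their own noise (J-invariance).
    kernel = np.array([[0.0, 0.25, 0.0],
                       [0.25, 0.0, 0.25],
                       [0.0, 0.25, 0.0]])
    interpolated = convolve(noisy, kernel, mode="mirror")
    masked_input = np.where(mask, interpolated, noisy)
    denoised = gaussian_filter(masked_input, sigma)
    # Score only on the masked pixels, against their original noisy values.
    return float(np.mean((denoised[mask] - noisy[mask]) ** 2))

def calibrate_sigma(noisy, sigmas=(0.5, 1.0, 1.5, 2.0, 3.0)):
    # Pick the parameter minimising the self-supervised loss --
    # no clean ground-truth image is needed.
    return min(sigmas, key=lambda s: noise2self_score(noisy, s))
```

The same recipe generalises from one sigma to any parameterised denoiser, which is the idea Aydin builds on.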
Some methods combine multiple ideas, so the classification above is not strict. We recommend first trying a good baseline denoiser such as the Butterworth denoiser. If you are unsatisfied with the result and have a powerful computer with many CPU cores, we recommend trying the Noise2Self-FGR-cb denoiser. For detailed use-cases check here.
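To give an idea of the kind of filter a Butterworth baseline applies, here is a minimal n-dimensional frequency-domain sketch (not Aydin's actual implementation; the cutoff and order values are illustrative):

```python
import numpy as np

def butterworth_denoise(image, cutoff=0.2, order=4):
    # Build an n-dimensional low-pass Butterworth gain in frequency space:
    # ~1 below the cutoff frequency, rolling off smoothly above it.
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in image.shape],
                        indexing="ij")
    radius = np.sqrt(sum(f ** 2 for f in freqs))
    gain = 1.0 / (1.0 + (radius / cutoff) ** (2 * order))
    # Filter by multiplying the image spectrum with the gain.
    return np.real(np.fft.ifftn(np.fft.fftn(image) * gain))
```

The cutoff frequency and filter order are exactly the kind of parameters that the self-supervised auto-tuning described above can discover for you.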
We regularly come up with new approaches and ideas, but there is not enough time to write papers about them all. This means that the best 'publication' for some of these novel algorithms is this repo itself, so please be so kind as to cite this repo for any ideas that you use or reuse. We have a long to-do list of existing, modified, as well as original algorithms that we plan to add to Aydin in the coming weeks and months. We will do so progressively as time allows. Stay tuned!
Aydin's documentation can be found here.
We recommend that users who are not familiar with Python start with our user-friendly UI. The latest releases can be found on the releases page. Detailed installation instructions of Aydin Studio for all three operating systems can be found here.
Aydin requires Python 3.9 or later and NumPy 2.0+. We recommend using a virtual environment:
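One way to set up such an environment (the environment name aydin-env is arbitrary):

```shell
# Create and activate an isolated virtual environment for Aydin.
python3 -m venv aydin-env
. aydin-env/bin/activate    # on Windows: aydin-env\Scripts\activate
```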
pip install aydin

The project uses hatchling as its build backend (configured in pyproject.toml).
git clone https://github.com/royerlab/aydin.git
cd aydin
make setup  # or: pip install -e ".[dev]"

Run make help to see all available development commands (testing, formatting, building, etc.).
macOS: Install OpenMP support:
brew install libomp

You can install Homebrew by following the instructions here.
Linux: Install the Qt system dependency (required by Qt 6.5+):
sudo apt install libxcb-cursor0 # Ubuntu/Debian
sudo dnf install xcb-util-cursor # Fedora/RHEL
sudo pacman -S xcb-util-cursor  # Arch Linux

Aydin uses PyTorch for CNN-based denoising. To enable GPU acceleration, ensure your PyTorch installation supports CUDA. See the PyTorch installation guide for platform-specific instructions.
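A quick way to check whether your Python environment can see a CUDA-capable GPU through PyTorch (a small helper sketch; it degrades gracefully when PyTorch is not installed):

```python
def cuda_available():
    # Returns True only if PyTorch is installed *and* built with a working
    # CUDA runtime that can see at least one GPU.
    try:
        import torch
    except ImportError:
        return False
    return torch.cuda.is_available()

print("CUDA available:", cuda_available())
```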
Run Aydin without installing Python or any dependencies. Requires a container runtime:
on macOS we recommend OrbStack (brew install orbstack);
on Linux use Docker Engine (apt install docker.io).
# Denoise an image (CLI)
docker run --rm -v $(pwd):/data ghcr.io/royerlab/aydin denoise /data/image.tif
# Launch Aydin Studio GUI in your browser
docker run --rm -p 9876:9876 --shm-size=256m -v $(pwd):/data ghcr.io/royerlab/aydin-studio
# Open http://localhost:9876
# With GPU acceleration
docker run --rm --gpus all -v $(pwd):/data ghcr.io/royerlab/aydin:gpu denoise /data/image.tif

See docker/README.md for full Docker documentation including runtime setup, GPU configuration, Docker Compose, HPC/Singularity usage, and troubleshooting.
Assuming that you have installed Aydin in an environment, you can:
Start Aydin Studio from the command line with:
aydin

Run the Command Line Interface (CLI) for denoising:
aydin denoise path/to/noisyimage

Get help on command line usage:
aydin -h

Recommended specifications: at least 16 GB of RAM (ideally 32 GB, and more for very large images), a CPU with at least 4 cores (preferably 16 or more), and a recent NVIDIA graphics card such as an RTX-series card. Older graphics cards could work but may cause trouble or be too slow. Aydin Studio's summary page gives an overview of the strengths and weaknesses of your machine, highlighting in red and orange items that might be problematic.
- On Ubuntu, and perhaps other Linux systems, high-DPI modes tend to mess with font and UI-element rendering.
Feel free to check our contributing guidelines first and start discussing your new ideas and feedback with us through issues.
You can cite our work with: https://doi.org/10.5281/zenodo.5654826
