git clone https://github.com/andreas128/SRFlow.git && cd SRFlow && ./setup.sh
This one-liner will:
- Clone SRFlow
- Setup a python3 virtual env
- Install the packages from requirements.txt
- Download the pretrained models
- Download the validation data
- Run the Demo Jupyter Notebook
If you want to install it manually, read the setup.sh file (it contains the links to the data/models and the pip packages).
./run_jupyter.sh
This notebook lets you:
- Load the pretrained models.
- Super-resolve images.
- Measure PSNR/SSIM/LPIPS.
- Infer the Normalizing Flow latent space.
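As a rough illustration of the PSNR metric the notebook reports, here is a minimal pure-Python sketch. The notebook itself uses the repository's own measurement code; the tiny 2x2 "images" below are made up for illustration only.

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equally sized images.

    img_a, img_b: flat lists of pixel intensities in [0, max_val].
    """
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy example: two 2x2 grayscale "images" flattened to lists.
ground_truth = [52, 55, 61, 66]
prediction = [54, 55, 60, 65]
print(round(psnr(ground_truth, prediction), 2))
```

SSIM and LPIPS follow the same pattern (a distance between prediction and ground truth) but need the repository's dependencies, so they are not sketched here.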
source myenv/bin/activate # Use the env you created using setup.sh
cd code
CUDA_VISIBLE_DEVICES=-1 python test.py ./confs/SRFlow_DF2K_4X.yml # Diverse Images 4X (Dataset Included)
CUDA_VISIBLE_DEVICES=-1 python test.py ./confs/SRFlow_DF2K_8X.yml # Diverse Images 8X (Dataset Included)
CUDA_VISIBLE_DEVICES=-1 python test.py ./confs/SRFlow_CelebA_8X.yml # Faces 8X
For testing, we apply SRFlow to the full images on the CPU.
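The `CUDA_VISIBLE_DEVICES=-1` prefix hides all GPUs from PyTorch, so inference runs on the CPU. The same effect can be achieved from inside Python, as a sketch (the variable must be set before CUDA is first initialized):

```python
import os

# Hide all CUDA devices; must happen before the first CUDA call.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# With the variable set, torch.cuda.is_available() returns False and
# tensors default to the CPU (assuming PyTorch is installed):
# import torch
# assert not torch.cuda.is_available()
print(os.environ["CUDA_VISIBLE_DEVICES"])
```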
The following commands train the Super-Resolution network using Normalizing Flow in PyTorch:
source myenv/bin/activate # Use the env you created using setup.sh
cd code
python train.py -opt ./confs/SRFlow_DF2K_4X.yml # Diverse Images 4X (Dataset Included)
python train.py -opt ./confs/SRFlow_DF2K_8X.yml # Diverse Images 8X (Dataset Included)
python train.py -opt ./confs/SRFlow_CelebA_8X.yml # Faces 8X
- To reduce GPU memory usage, reduce the batch size in the .yml file.
- The CelebA license does not permit us to host the dataset; a preparation script will follow.
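As a sketch of the batch-size note above: the key name (`batch_size`) and the config fragment are assumptions here, so check the actual .yml for the exact key. A stdlib-only rewrite avoids depending on a YAML parser:

```python
def set_yaml_value(text, key, new_value):
    """Rewrite 'key: value' lines in a simple flat YAML snippet."""
    out = []
    for line in text.splitlines():
        parts = line.split(":", 1)
        if len(parts) == 2 and parts[0].strip() == key:
            indent = line[: len(line) - len(line.lstrip())]
            out.append(f"{indent}{key}: {new_value}")
        else:
            out.append(line)
    return "\n".join(out)

# Hypothetical config fragment -- the real key names may differ.
conf = "datasets:\n  train:\n    batch_size: 16\n"
print(set_yaml_value(conf, "batch_size", 4))
```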
The following command creates the pickle files that you can then use in the .yml config file:
cd code
python prepare_data.py /path/to/img_dir
The precomputed DF2K dataset is downloaded by setup.sh. You can reproduce it or prepare your own dataset.
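A minimal sketch of what such a pickle could contain: a serialized index of image paths. This is an assumption for illustration, not the real schema written by prepare_data.py, which may store the image data itself.

```python
import os
import pickle
import tempfile

def pickle_image_paths(img_dir, out_path, exts=(".png", ".jpg")):
    """Collect image paths from a directory and pickle the sorted list."""
    paths = sorted(
        os.path.join(img_dir, f)
        for f in os.listdir(img_dir)
        if f.lower().endswith(exts)
    )
    with open(out_path, "wb") as fh:
        pickle.dump(paths, fh)
    return paths

# Demo on a temporary directory with two dummy image files and one other file.
with tempfile.TemporaryDirectory() as d:
    for name in ("a.png", "b.jpg", "notes.txt"):
        open(os.path.join(d, name), "w").close()
    saved = pickle_image_paths(d, os.path.join(d, "index.pkl"))
    print([os.path.basename(p) for p in saved])  # only the image files
```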
- How to train Conditional Normalizing Flow: We designed an architecture that achieves state-of-the-art super-resolution quality.
- How to train Normalizing Flow on a single GPU: We based our network on GLOW, which uses up to 40 GPUs to train for image generation; SRFlow only needs a single GPU for training conditional image generation.
- How to use Normalizing Flow for image manipulation: Exploit the latent space of Normalizing Flow for controlled image manipulations.
- See many visual results: Compare GAN vs. Normalizing Flow yourself. We included many visual results in our [Paper].
- Sampling: SRFlow outputs many different images for a single input.
- Stable Training: SRFlow has far fewer hyperparameters than GAN approaches, and we did not encounter training stability issues.
- Convergence: While GANs cannot converge, conditional Normalizing Flows converge monotonically and stably.
- Higher Consistency: Downsampling the super-resolved image yields almost exactly the low-resolution input.
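The consistency property can be checked mechanically: downsample the super-resolved image and compare it with the low-resolution input. Below is a stdlib sketch using 2x2 average pooling on a toy grayscale image; the real evaluation would use the same downsampling kernel as the dataset (e.g. bicubic), which this simplification ignores.

```python
def downsample2x(img):
    """2x2 average pooling of a grayscale image given as a list of rows."""
    h, w = len(img), len(img[0])
    return [
        [
            (img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# Toy 4x4 "super-resolved" image and its 2x2 low-resolution counterpart.
sr = [
    [10, 12, 20, 22],
    [14, 12, 24, 22],
    [30, 32, 40, 42],
    [34, 32, 44, 42],
]
lr = [[12, 22], [32, 42]]

ds = downsample2x(sr)
err = max(abs(a - b) for row_d, row_l in zip(ds, lr) for a, b in zip(row_d, row_l))
print(ds, err)
```

A small `err` means the super-resolved image is consistent with its low-resolution input.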
Get a quick introduction to Normalizing Flow in our [Blog].
If you found a bug or improved the code, please do the following:
- Fork this repo.
- Push the changes to your repo.
- Create a pull request.
@inproceedings{lugmayr2020srflow,
title={SRFlow: Learning the Super-Resolution Space with Normalizing Flow},
author={Lugmayr, Andreas and Danelljan, Martin and Van Gool, Luc and Timofte, Radu},
booktitle={ECCV},
year={2020}
}