1 University of Würzburg, Germany - 2 Shanghai Jiao Tong University, China - 3 ETH Zürich, Switzerland
- 05/29/2024: Added 🤗Demo.
- 05/23/2024: Code & ckpt & results release (Google Drive).
- 05/02/2024: SeemoRe has been accepted at ICML 2024! 🎉
- 02/05/2024: Technical report released on arXiv.
Abstract
Reconstructing high-resolution (HR) images from low-resolution (LR) inputs poses a significant challenge in image super-resolution (SR). While recent approaches have demonstrated the efficacy of intricate operations customized for various objectives, the straightforward stacking of these disparate operations can result in a substantial computational burden, hampering their practical utility. In response, we introduce **S**eemo**R**e, an efficient SR model employing expert mining. Our approach strategically incorporates experts at different levels, adopting a collaborative methodology. At the macro scale, our experts address rank-wise and spatial-wise informative features, providing a holistic understanding. Subsequently, the model delves into the subtleties of rank choice by leveraging a mixture of low-rank experts. By tapping into experts specialized in distinct key factors crucial for accurate SR, our model excels in uncovering intricate intra-feature details. This collaborative approach is reminiscent of the concept of **see more**, allowing our model to achieve optimal performance with minimal computational costs in efficient settings.

Mixture of Low Rank Experts
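For intuition, the sketch below illustrates the mixture-of-low-rank-experts idea in PyTorch: several experts with different bottleneck ranks whose outputs are combined by a learned router. This is a simplified illustration, not the official implementation; the names `LowRankExpert`, `MixtureOfLowRankExperts`, and the example ranks are hypothetical.

```python
# Illustrative sketch only (not the official SeemoRe code): a mixture of
# low-rank experts with soft routing. Names and ranks are hypothetical.
import torch
import torch.nn as nn


class LowRankExpert(nn.Module):
    """Low-rank bottleneck: project channels down to `rank`, then back up."""

    def __init__(self, channels: int, rank: int):
        super().__init__()
        self.down = nn.Linear(channels, rank, bias=False)
        self.up = nn.Linear(rank, channels, bias=False)

    def forward(self, x):  # x: (B, N, C) flattened spatial tokens
        return self.up(self.down(x))


class MixtureOfLowRankExperts(nn.Module):
    """Combines experts of different ranks with learned per-token weights."""

    def __init__(self, channels: int, ranks=(2, 4, 8)):
        super().__init__()
        self.experts = nn.ModuleList(LowRankExpert(channels, r) for r in ranks)
        self.router = nn.Linear(channels, len(ranks))

    def forward(self, x):  # x: (B, N, C)
        weights = torch.softmax(self.router(x), dim=-1)            # (B, N, E)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)   # (B, N, C, E)
        return x + (outs * weights.unsqueeze(2)).sum(dim=-1)       # residual update


feats = torch.randn(1, 64 * 64, 48)              # (batch, H*W tokens, channels)
print(MixtureOfLowRankExperts(48)(feats).shape)  # torch.Size([1, 4096, 48])
```

A real mixture-of-experts layer typically routes sparsely (activating only the best-suited expert) to keep compute low; the dense soft routing above just keeps the sketch short.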
Visual Comparison
Side-by-side image crops comparing HR, Bicubic, SwinIR-Light, DAT-Light, and SeemoRe (ours).
Create a conda environment:
ENV_NAME="seemore"
conda create -n $ENV_NAME python=3.10
conda activate $ENV_NAME
Run the following script to install the dependencies:
bash install.sh
Pre-trained checkpoints and visual results can be downloaded here. Place the checkpoints in checkpoints/.
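As a quick sanity check that a downloaded checkpoint is in place and loads correctly, something like the following works; the filename is hypothetical, and the `params` key follows the usual BasicSR convention (assumed here).

```python
# Sanity-check a downloaded checkpoint (filename is hypothetical).
import torch

ckpt = torch.load("checkpoints/SeemoRe_T_X4.pth", map_location="cpu")
state = ckpt.get("params", ckpt)  # BasicSR usually nests weights under 'params' (assumption)
print(f"{len(state)} tensors, first key: {next(iter(state))}")
```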
In options you can find the config files for reproducing our experiments.
To test the pre-trained checkpoints, use the following command, replacing [TEST OPT YML] with the path to the corresponding option file.
python basicsr/test.py -opt [TEST OPT YML]
For single-GPU training, use the following command, replacing [TRAIN OPT YML] with the path to the corresponding option file.
torchrun --nproc_per_node=1 --master_port=4321 basicsr/train.py -opt [TRAIN OPT YML] --launcher pytorch
If you find our work helpful, please consider citing the following paper and/or ⭐ the repo.
@inproceedings{zamfir2024details,
title={See More Details: Efficient Image Super-Resolution by Experts Mining},
author={Eduard Zamfir and Zongwei Wu and Nancy Mehta and Yulun Zhang and Radu Timofte},
booktitle={International Conference on Machine Learning},
year={2024},
organization={PMLR}
}
This code is built on BasicSR.