PromptIQA: Boosting the Performance and Generalization for No-Reference Image Quality Assessment via Prompts
- The 18th European Conference on Computer Vision ECCV 2024
🚀 🚀 🚀 News:
- To be updated...
- ✅ September, 2024: We released the checkpoints and testing code.
- ✅ September, 2024: We released the online demo.
- ✅ March, 2024: We created this repository.
- [ ] Code for training
- [x] Code for PromptIQA
- [x] Code for testing
- [x] Checkpoint
- [x] Online demo on Hugging Face
This is an official PyTorch implementation of PromptIQA: Boosting the Performance and Generalization for No-Reference Image Quality Assessment via Prompts.
Because assessment requirements differ across the application scenarios of the IQA task, existing IQA methods struggle to adapt directly to these varied requirements after training. When facing a new requirement, the typical approach is to fine-tune the model on a dataset built specifically for that requirement, but establishing IQA datasets is time-consuming. In this work, we propose a Prompt-based IQA (PromptIQA) that can directly adapt to new requirements without fine-tuning after training. On one hand, it uses a short sequence of Image-Score Pairs (ISPs) as prompts for targeted predictions, which significantly reduces data requirements. On the other hand, PromptIQA is trained on a mixed dataset with two proposed data augmentation strategies to learn diverse requirements, enabling it to adapt effectively to new ones. Experiments indicate that PromptIQA outperforms SOTA methods with higher performance and better generalization.
Figure 1: The framework of the proposed PromptIQA.
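To make the prompting idea concrete, here is a minimal, self-contained sketch of how a query image can be scored conditioned on a short sequence of Image-Score Pairs. This is not the official model or API; the class, layer sizes, and fusion scheme are illustrative placeholders only.

```python
import torch

# Hypothetical sketch (NOT the official PromptIQA implementation):
# a model takes k Image-Score Pairs (ISPs) as a prompt plus a query image,
# and predicts the query's score under the requirement implied by the prompt.
class ToyPromptIQA(torch.nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # toy image encoder: conv features with global average pooling
        self.encoder = torch.nn.Sequential(
            torch.nn.Conv2d(3, feat_dim, 3, stride=2, padding=1),
            torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1),
            torch.nn.Flatten(),
        )
        # fuse each (image feature, score) pair into a requirement embedding
        self.prompt_fuser = torch.nn.Linear(feat_dim + 1, feat_dim)
        # regress the query score conditioned on the prompt embedding
        self.head = torch.nn.Linear(feat_dim * 2, 1)

    def forward(self, prompt_imgs, prompt_scores, query_img):
        # prompt_imgs: (k, 3, H, W), prompt_scores: (k, 1), query_img: (1, 3, H, W)
        p_feat = self.encoder(prompt_imgs)                                   # (k, feat_dim)
        pairs = torch.cat([p_feat, prompt_scores], dim=1)                    # (k, feat_dim + 1)
        prompt = self.prompt_fuser(pairs).mean(dim=0, keepdim=True)          # (1, feat_dim)
        q_feat = self.encoder(query_img)                                     # (1, feat_dim)
        return self.head(torch.cat([q_feat, prompt], dim=1))                 # (1, 1) score


if __name__ == "__main__":
    model = ToyPromptIQA()
    prompt_imgs = torch.rand(5, 3, 224, 224)   # 5 example images
    prompt_scores = torch.rand(5, 1)           # their scores under the target requirement
    query_img = torch.rand(1, 3, 224, 224)
    print(model(prompt_imgs, prompt_scores, query_img))
```

The key point is that the ISP prompt encodes the target assessment requirement, so swapping in a different ISP sequence changes the predicted scale without any fine-tuning.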
Click 👇 to try our demo online.
The dependencies for this work are as follows:
einops==0.7.0
numpy==1.24.4
opencv_python==4.8.0.76
openpyxl==3.1.2
Pillow==10.0.0
scipy
timm==0.5.4
torch==2.0.1+cu118
torchvision==0.15.2+cu118
tqdm==4.66.1
gradio
You can also run the following command to install the environment directly:
pip install -r requirements.txt
You can get our pretrained weights from Hugging Face.
Then put the checkpoints in ./PromptIQA/checkpoints.
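The weights can also be fetched programmatically with huggingface_hub; a minimal sketch is shown below. The repo_id is a placeholder, so replace it with the actual Hugging Face repository name of the checkpoints.

```python
from huggingface_hub import snapshot_download

# Placeholder repo_id -- replace with the actual Hugging Face repository of the checkpoints.
snapshot_download(
    repo_id="<user>/PromptIQA",
    local_dir="./PromptIQA/checkpoints",  # folder expected by the testing code
)
```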
You can use the following command to run the test demo:
python3 app.py
You can use the following command to run the testing code:
python3 test.py
We achieved state-of-the-art performance on most IQA datasets simultaneously with a single model.
More detailed results can be found in the paper.
Individual Dataset Comparison.
If our work is useful to your research, we would be grateful if you cite our paper:
@article{chen2024promptiqa,
title={PromptIQA: Boosting the Performance and Generalization for No-Reference Image Quality Assessment via Prompts},
author={Chen, Zewen and Qin, Haina and Wang, Juan and Yuan, Chunfeng and Li, Bing and Hu, Weiming and Wang, Liang},
journal={arXiv preprint arXiv:2403.04993},
year={2024}
}
We sincerely thank the great works HyperIQA, MANIQA, and MoCo. The code structure is partly based on their open repositories.