This is the official repository for PIQ23, accepted at CVPR 2023.
PIQ23 (CVPR 2023) / FHIQA (NTIRE 2024) / Video / Poster
We present PIQ23, a portrait-specific image quality assessment dataset of 5116 images of predefined scenes acquired by more than 100 smartphones, covering a broad variety of brands, models, and use cases. The dataset features individuals from a wide range of ages, genders, and ethnicities who have given explicit and informed consent for their photographs to be used in public research. It is annotated through pairwise comparisons (PWC) collected from over 30 image quality experts for three image attributes: face detail preservation, face target exposure, and overall image quality.
Important Notes
- By downloading this dataset, you agree to the terms and conditions.
- All files in the PIQ23 dataset are available for non-commercial research purposes only.
- You agree not to reproduce, duplicate, copy, sell, trade, resell, or exploit any portion of the images or of the derived data for any commercial purpose.
- You agree to remove, at any point during the life cycle of the dataset, any set of images upon request of the authors.
Dataset Access
- The PIQ23 dataset (5GB) can be downloaded from the DXOMARK CORP website.
- You need to fill in the form and agree to the terms and conditions in order to request access to the dataset. We guarantee open access to any individual or institution that follows these instructions.
- Your request will be validated shortly, and you will receive an automated email with a temporary link to download the dataset.
Overview
The dataset structure is as follows:
```
├── Details
├── Overall
├── Exposure
├── Scores_Details.csv
├── Scores_Overall.csv
└── Scores_Exposure.csv
```
Each folder is associated with an attribute (Details, Overall, and Exposure) and contains the images of the corresponding regions of interest, named as follows: {img_nb}_{scene_name}_{scene_idx}.{ext}.
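For illustration, here is a minimal way to parse this naming scheme in Python. The regular expression simply mirrors the pattern above; the helper itself is not part of the dataset tooling, and the exact field formats (e.g., zero-padding of the indices) may vary:

```python
import re

# Mirrors the "{img_nb}_{scene_name}_{scene_idx}.{ext}" convention above.
NAME_RE = re.compile(r"^(?P<img_nb>\d+)_(?P<scene_name>.+)_(?P<scene_idx>\d+)\.(?P<ext>\w+)$")

def parse_image_name(name: str) -> dict:
    """Split an image file name into its img_nb, scene_name, scene_idx, ext fields."""
    match = NAME_RE.match(name)
    if match is None:
        raise ValueError(f"Unexpected file name: {name}")
    return match.groupdict()
```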
The CSV files include the following entries (a loading example follows the list):
- IMAGE PATH: relative path to the image ({Attribute}\{Image name})
- IMAGE: image name
- JOD: JOD (just-objectionable-difference) score of the image
- JOD STD: standard deviation of the JOD score
- CI LOW: lower bound of the image's confidence interval
- CI HIGH: upper bound of the image's confidence interval
- CI RANGE: CI HIGH - CI LOW
- QUALITY LEVEL: preliminary quality level (result of the clustering over CIs)
- CLUSTER: final quality levels (result of the variance analysis and community detection)
- TOTAL COMPARISONS: total number of comparisons for this image
- SCENE: scene name
- ATTRIBUTE: attribute (Exposure, Details or Overall)
- SCENE IDX: scene index (from 0 to 49)
- CONDITION: lighting condition (Outdoor, Indoor, Lowlight or Night)
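As a quick illustration, the score files can be loaded with pandas using the column names listed above (the aggregation at the end is just an example):

```python
import pandas as pd

# Load the three per-attribute score files named in the tree above.
scores = pd.concat(
    (pd.read_csv(f"Scores_{attr}.csv") for attr in ("Details", "Exposure", "Overall")),
    ignore_index=True,
)

# Example: mean JOD per lighting condition for the Details attribute.
details = scores[scores["ATTRIBUTE"] == "Details"]
print(details.groupby("CONDITION")["JOD"].mean())
```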
We provide two official test splits for PIQ23:
- Device split:
  - We split PIQ23 by device in order to test the general performance of the trained models on the given scenes.
  - The test set contains around 30% of the images from each scene, and thus around 30% of the whole dataset.
  - To avoid device bias, we carefully selected devices from different quality levels and price ranges for the test set. Since the distribution of devices per scene is not completely uniform, this split may still include some images of the test devices in the training set and vice versa; however, we can guarantee that more than 90% of the training and testing devices do not overlap.
  - We first sort the devices by their median percentage of images across scenes, split them into five groups from the most common device to the least common, and sample from these five groups until we reach around 30% of the dataset (see the sketch after this list).
  - The device split CSV can be found in "Test split\Device Split.csv".
  - The train and test CSVs for the different attributes can be found in "Test split\Device Split".
- Scene split:
  - We split PIQ23 by scene in order to test the generalization power of the trained models.
  - We carefully chose 15 of the 50 scenes for the test set, covering around 30% of the images of each condition, and thus around 30% of the whole dataset (1486/5116 images).
  - To select the test set, we first sort the scenes by the percentage of images in the corresponding condition (Outdoor, Indoor, Lowlight, Night); we then select a group of scenes covering a variety of content (framing, lighting, skin tones, etc.) until we reach around 30% of the images for each condition.
  - The scene split CSV can be found in "Test split\Scene Split.csv".
  - The train and test CSVs for the different attributes can be found in "Test split\Scene Split".
  - Examples of the test and train scenes can be found in "Test split\Scene Split\Scene examples".
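The device sampling procedure described above can be sketched as follows. This is illustrative only: it assumes a table with one row per image and hypothetical DEVICE and SCENE columns, which the published score CSVs do not contain.

```python
import numpy as np
import pandas as pd

def device_groups(images: pd.DataFrame, n_groups: int = 5) -> list:
    """Sort devices by their median per-scene share of images, then split
    them into n_groups bands from most to least common (illustrative).
    Expects hypothetical DEVICE and SCENE columns, one row per image."""
    counts = images.groupby(["SCENE", "DEVICE"]).size().rename("n").reset_index()
    counts["share"] = counts["n"] / counts.groupby("SCENE")["n"].transform("sum")
    median_share = counts.groupby("DEVICE")["share"].median().sort_values(ascending=False)
    # Test devices would then be sampled from each band until ~30% of
    # the dataset is reached.
    return [band.index.tolist() for band in np.array_split(median_share, n_groups)]
```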
An example of how to use the splits can be found in the "Test split example.ipynb" notebook.
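For instance, a split can be applied roughly like this (the file names inside the split folder are hypothetical; see the notebook for the exact layout):

```python
import pandas as pd

# Hypothetical file names; check "Test split example.ipynb" for the real ones.
train = pd.read_csv("Test split/Device Split/train_details.csv")
test = pd.read_csv("Test split/Device Split/test_details.csv")

# Attach the JOD scores to each subset via the IMAGE column.
scores = pd.read_csv("Scores_Details.csv")
train_scores = scores[scores["IMAGE"].isin(train["IMAGE"])]
test_scores = scores[scores["IMAGE"].isin(test["IMAGE"])]
```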
NB:
- Please make sure to report results on both splits in your papers.
- The paper's main results cannot be reproduced with these splits. Official performances on these splits will be published soon.
Note on the experiments:
- The reported results represent the median of the metrics across all scenes. The median was used to limit the influence of outlier scenes, in case they exist (a sketch of this protocol follows this list).
- The models chosen as optimal in these experiments are the ones that achieved the maximum SROCC on the test sets. Keep in mind that a maximum SROCC does not imply a maximum in the other metrics.
- A better approach would be to choose the optimal model based on a combination of metrics.
- A margin of error should be taken into account for these metrics; a minimal difference in correlation can be due to multiple factors and might not be repeatable.
- The base resolution for the models is 1200; however, for the HyperIQA variants we had to redefine the model architecture, since the original only accepts 224x224 inputs. The new architecture accepts resolutions that are a multiple of 224 (1344 in our case).
- For the HyperIQA variants, only the ResNet50 backbone is pretrained on ImageNet; there was no IQA pretraining.
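A minimal sketch of this evaluation protocol (per-scene metrics, then the median across scenes; the PRED column of model predictions is hypothetical):

```python
import pandas as pd
from scipy import stats

def median_scene_metrics(df: pd.DataFrame) -> pd.Series:
    """Compute SROCC/PLCC/KROCC/MAE per scene, then take the median across
    scenes. Expects SCENE and JOD columns plus a hypothetical PRED column."""
    rows = []
    for _, scene in df.groupby("SCENE"):
        pred, gt = scene["PRED"], scene["JOD"]
        rows.append({
            "SROCC": stats.spearmanr(pred, gt)[0],
            "PLCC": stats.pearsonr(pred, gt)[0],
            "KROCC": stats.kendalltau(pred, gt)[0],
            "MAE": (pred - gt).abs().mean(),
        })
    return pd.DataFrame(rows).median()
```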
Device Split

| Model (resolution x IQA pretraining) | Details SROCC | Details PLCC | Details KROCC | Details MAE | Exposure SROCC | Exposure PLCC | Exposure KROCC | Exposure MAE | Overall SROCC | Overall PLCC | Overall KROCC | Overall MAE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DBCNN (1200 x LIVEC) | 0.787 | 0.783 | 0.59 | 0.777 | 0.807 | 0.804 | 0.611 | 0.704 | 0.83 | 0.824 | 0.653 | 0.656 |
| MUSIQ (1200 x PAQ2PIQ) | 0.824 | 0.831 | 0.65 | 0.627 | 0.848 | 0.859 | 0.671 | 0.585 | 0.848 | 0.837 | 0.65 | 0.626 |
| HyperIQA (1344 (224*6) x No IQA pretraining) | 0.793 | 0.766 | 0.618 | 0.751 | 0.8 | 0.828 | 0.636 | 0.721 | 0.818 | 0.825 | 0.66 | 0.612 |
| SEM-HyperIQA (1344 (224*6) x No IQA pretraining) | 0.854 | 0.847 | 0.676 | 0.645 | 0.826 | 0.858 | 0.65 | 0.635 | 0.845 | 0.856 | 0.674 | 0.641 |
| SEM-HyperIQA-CO (1344 (224*6) x No IQA pretraining) | 0.829 | 0.821 | 0.641 | 0.697 | 0.816 | 0.843 | 0.633 | 0.668 | 0.829 | 0.843 | 0.64 | 0.624 |
| SEM-HyperIQA-SO (1344 (224*6) x No IQA pretraining) | 0.874 | 0.871 | 0.709 | 0.583 | 0.826 | 0.846 | 0.651 | 0.678 | 0.84 | 0.849 | 0.661 | 0.639 |
Scene Split

| Model (resolution x IQA pretraining) | Details SROCC | Details PLCC | Details KROCC | Details MAE | Exposure SROCC | Exposure PLCC | Exposure KROCC | Exposure MAE | Overall SROCC | Overall PLCC | Overall KROCC | Overall MAE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DBCNN (1200 x LIVEC) | 0.59 | 0.51 | 0.45 | 0.99 | 0.69 | 0.69 | 0.51 | 0.91 | 0.59 | 0.64 | 0.43 | 1.04 |
| MUSIQ (1200 x PAQ2PIQ) | 0.72 | 0.77 | 0.53 | 0.90 | 0.79 | 0.772 | 0.59 | 0.87 | 0.736 | 0.74 | 0.54 | 0.95 |
| HyperIQA (1344 (224*6) x No IQA pretraining) | 0.701 | 0.668 | 0.504 | 0.936 | 0.692 | 0.684 | 0.498 | 0.863 | 0.74 | 0.736 | 0.55 | 0.989 |
| SEM-HyperIQA (1344 (224*6) x No IQA pretraining) | 0.732 | 0.649 | 0.547 | 0.879 | 0.716 | 0.697 | 0.53 | 0.967 | 0.749 | 0.752 | 0.558 | 1.033 |
| SEM-HyperIQA-CO (1344 (224*6) x No IQA pretraining) | 0.746 | 0.714 | 0.549 | 0.849 | 0.698 | 0.698 | 0.517 | 0.945 | 0.739 | 0.736 | 0.55 | 1.038 |
| FULL-HyperIQA (1344 (224*6) x No IQA pretraining) | 0.74 | 0.72 | 0.55 | 0.8 | 0.76 | 0.71 | 0.57 | 0.85 | 0.78 | 0.78 | 0.59 | 1.12 |
To do:
- Add SEM-HyperIQA code
- Add statistical analysis code
- Add the code of the other benchmarks
- Add pretrained weights
Please cite the paper/dataset as follows:
```bibtex
@InProceedings{Chahine_2023_CVPR,
    author    = {Chahine, Nicolas and Calarasanu, Stefania and Garcia-Civiero, Davide and Cayla, Th\'eo and Ferradans, Sira and Ponce, Jean},
    title     = {An Image Quality Assessment Dataset for Portraits},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {9968-9978}
}
```
Provided that the user complies with the Terms of Use, the provider grants a limited, non-exclusive, personal, non-transferable, non-sublicensable, and revocable license to access, download and use the Database for internal and research purposes only, during the specified term. The User is required to comply with the Provider's reasonable instructions, as well as all applicable statutes, laws, and regulations.
For any questions please contact: [email protected]