Zhenbo Song *, Zhenyuan Zhang *, Kaihao Zhang, Wenhan Luo, Zhaoxin Fan, Wenqi Ren, Jianfeng Lu
Keywords: reflection removal, adversarial attack
Abstract: This paper addresses the problem of robust deep single-image reflection removal (SIRR) against adversarial attacks. Current deep learning-based SIRR methods have shown significant performance degradation due to unnoticeable distortions and perturbations on input images. For a comprehensive robustness study, we first conduct diverse adversarial attacks specifically for the SIRR problem, i.e., towards different attack targets and regions. Then we propose a robust SIRR model, which integrates the cross-scale attention module, the multi-scale fusion module, and the adversarial image discriminator. By exploiting the multi-scale mechanism, the model narrows the gap between features from clean and adversarial images. The image discriminator adaptively distinguishes clean from noisy inputs, and thus further gains reliable robustness. Extensive experiments on the Nature, SIR2, and Real datasets demonstrate that our model remarkably improves the robustness of SIRR across disparate scenes.
🌟 If RobustSIRR is helpful to your images or projects, please help star this repo. Thanks! 🤗
- Python >= 3.8.5
- PyTorch >= 1.11
- CUDA >= 11.3
- Other required packages listed in `requirements.txt`
# git clone this repository
git clone https://github.com/ZhenboSong/RobustSIRR.git
cd RobustSIRR
# create new anaconda env
conda create -n sirr python=3.8 -y
conda activate sirr
# install python dependencies by pip
pip install -r requirements.txt
🌟 Download the pre-trained RobustSIRR models from [Pre-trained_RobustSIRR_BaiduYunDisk (pwd:sirr), Google Drive] to the `checkpoints` folder.
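Once downloaded, the weights can typically be restored with `torch.load`. A minimal sketch, assuming a generic checkpoint; the filename `robustsirr.pth` and the nested `state_dict` key are placeholders, not guaranteed by the release:

```python
import torch

def load_checkpoint(model, path, device="cpu"):
    """Load saved weights into model; unwrap a nested 'state_dict' if present."""
    state = torch.load(path, map_location=device)
    # Some checkpoints store weights under a 'state_dict' key (an assumption here)
    if isinstance(state, dict) and "state_dict" in state:
        state = state["state_dict"]
    model.load_state_dict(state)
    return model
```

Check the actual checkpoint layout after downloading; `strict=False` can be passed to `load_state_dict` if key names differ slightly.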
- 7,643 cropped images of size 224 × 224 from the Pascal VOC dataset (image IDs are provided in `VOC2012_224_train_png.txt`; crop the center 224 × 224 region to reproduce our results)
- 90 (89) real-world training images from the Berkeley real dataset
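The center crop above can be reproduced with a small helper. A minimal sketch, assuming images are loaded as H × W × C NumPy arrays (`center_crop` is an illustrative helper, not part of the repo):

```python
import numpy as np

def center_crop(img, size=224):
    """Crop the central size x size region of an H x W x C array."""
    h, w = img.shape[:2]
    assert h >= size and w >= size, "image smaller than crop size"
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]
```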
❗ Place the processed VOC2012 and real datasets in the `datasets` folder, and name them `VOC2012` and `real89` respectively.
🌟 For convenience, you can directly download the prepared training datasets from [VOC2012_For_RobustSIRR_BaiduYunDisk (pwd:sirr), Google Drive] and [real89_For_RobustSIRR_BaiduYunDisk (pwd:sirr), Google Drive].
- 20 real testing images from the Berkeley real dataset
- Three sub-datasets, namely ‘Objects’, ‘Postcard’, and ‘Wild’, from the SIR2 dataset
- 20 testing images from the Nature dataset
❗ Place the processed datasets in the `datasets` folder, and name them `real20`, `SIR2`, and `nature20` respectively.
🌟 For convenience, you can directly download the prepared testing datasets from [TestingDataset_For_RobustSIRR_BaiduYunDisk (pwd:sirr), Google Drive].
The hierarchical structure of all datasets is illustrated in the following diagram.
datasets
├── nature20
│ ├── blended
│ └── transmission_layer
├── real20
│ ├── blended
│ ├── real_test.txt
│ └── transmission_layer
├── real89
│ ├── blended
│ └── transmission_layer
├── SIR2
│ ├── PostcardDataset
│ │ ├── blended
│ │ ├── reflection
│ │ └── transmission_layer
│ ├── SolidObjectDataset
│ │ ├── blended
│ │ ├── reflection
│ │ └── transmission_layer
│ └── WildSceneDataset
│ ├── blended
│ ├── reflection
│ └── transmission_layer
└── VOC2012
├── blended
├── JPEGImages
├── reflection_layer
├── reflection_mask_layer
├── transmission_layer
└── VOC_results_list.json
Note:
- `transmission_layer` is the GT, `blended` is the input, and `reflection`/`reflection_layer` is the reflection part
- For the SIR^2 dataset, we only standardize the folder structure
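Before training, it may help to verify that the layout above is in place. A minimal sketch (the `missing_dirs` helper and the `datasets` root path are illustrative; only a subset of the expected sub-folders is listed):

```python
import os

# Root of the dataset tree shown above; adjust to your setup
DATASETS_ROOT = "datasets"

# A representative subset of the expected sub-folders
EXPECTED = [
    "nature20/blended", "nature20/transmission_layer",
    "real20/blended", "real20/transmission_layer",
    "real89/blended", "real89/transmission_layer",
    "SIR2/PostcardDataset/blended", "SIR2/SolidObjectDataset/blended",
    "SIR2/WildSceneDataset/blended",
    "VOC2012/blended", "VOC2012/transmission_layer",
]

def missing_dirs(root, expected):
    """Return the expected sub-folders that are absent under root."""
    return [d for d in expected if not os.path.isdir(os.path.join(root, d))]
```

Running `missing_dirs(DATASETS_ROOT, EXPECTED)` before training surfaces misplaced or misnamed folders early.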
- For adv. training:
# To Be Released
- For clean images training:
# ours_cvpr
CUDA_VISIBLE_DEVICES=0 python train.py --name ours --gpu_id 0 --no-verbose --display_id -1 --batchSize 4
# ours_wo_aid
CUDA_VISIBLE_DEVICES=0 python train.py --name ours_wo_aid --gpu_id 0 --no-verbose --display_id -1 --batchSize 4 --wo_aid
# ours_wo_aff
CUDA_VISIBLE_DEVICES=0 python train.py --name ours_wo_aff --gpu_id 0 --no-verbose --display_id -1 --batchSize 4 --wo_aff
# ours_wo_scm
CUDA_VISIBLE_DEVICES=0 python train.py --name ours_wo_scm --gpu_id 0 --no-verbose --display_id -1 --batchSize 4 --wo_scm
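The four runs above can also be driven by a small loop. A hypothetical convenience wrapper (`build_cmd` is not part of the repo; it only assembles the same commands shown above):

```shell
#!/bin/sh
# Hypothetical helper: assemble the training command for one ablation variant.
build_cmd() {
  name="$1"
  extra=""
  # variants other than "ours" add their corresponding --wo_* flag
  [ "$name" != "ours" ] && extra=" --${name#ours_}"
  echo "CUDA_VISIBLE_DEVICES=0 python train.py --name $name --gpu_id 0 --no-verbose --display_id -1 --batchSize 4$extra"
}

for v in ours ours_wo_aid ours_wo_aff ours_wo_scm; do
  build_cmd "$v"   # replace with: eval "$(build_cmd "$v")" to actually run
done
```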
Note:
- Check `options/robustsirr/train_options.py` to see more training options.
CUDA_VISIBLE_DEVICES=0 python test.py --name ours_cvpr --hyper --gpu_ids 0 -r --no-verbose --save_gt --save_attack --save_results
# To be released due to confidentiality concerns.
# Alternatively, you can refer to https://github.com/yuyi-sd/Robust_Rain_Removal
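Until the attack code is released, a generic PGD-style attack in this spirit can be sketched as follows. This is a hedged, minimal example, not the released implementation: `pgd_attack` and its hyper-parameters are illustrative, and the MSE objective loosely mirrors the MSE attack over full regions:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, target, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD with an MSE objective: perturb x within an L-inf ball of radius eps
    so that model(x_adv) moves away from the clean target."""
    # random start inside the eps-ball, clipped to the valid image range
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.mse_loss(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # ascend the loss, then project back into the eps-ball and [0, 1]
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```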
☝️ Comparison of the PSNR values with respect to perturbation levels
☝️ Comparison of different training strategies on three benchmark datasets. ‘w/’ and ‘w/o adv.’ mean training with or without adversarial images. MSE and LPIPS denote the corresponding attacks over full regions. ↓ and ↑ represent performance degradation and improvement relative to the original prediction on clean inputs.
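For reference, PSNR values such as those in the comparisons above follow the standard formula. A minimal sketch, assuming images as float arrays in [0, 1]:

```python
import numpy as np

def psnr(reference, output, max_val=1.0):
    """Peak signal-to-noise ratio between two images, in dB."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(output, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```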
If our work is useful for your research, please consider citing:
@InProceedings{Song_2023_CVPR,
author = {Song, Zhenbo and Zhang, Zhenyuan and Zhang, Kaihao and Luo, Wenhan and Fan, Zhaoxin and Ren, Wenqi and Lu, Jianfeng},
title = {Robust Single Image Reflection Removal Against Adversarial Attacks},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2023},
pages = {24688-24698}
}
- This project is based on ERRNet
- Some codes are brought from BasicSR and Robust_Rain_Removal
- This README is inspired by CodeFormer and FedFed
- For more awesome SIRR methods, you can refer to 👍 Awesome-SIRR
If you have any questions, please feel free to reach out at [email protected] or [email protected].