The official PyTorch implementation of SAFNet (ECCV 2024). The technology and code related to this paper are intended for academic research only; no commercial use of any kind is allowed.
Authors: Lingtong Kong, Bo Li, Yike Xiong, Hao Zhang, Hong Gu, Jinwei Chen
Multi-exposure High Dynamic Range (HDR) imaging is a challenging task when facing truncated texture and complex motion. Existing deep learning-based methods have achieved great success by either following the alignment-and-fusion pipeline or utilizing attention mechanisms. However, their large computation cost and inference delay hinder deployment on resource-limited devices. In this paper, to achieve better efficiency, a novel Selective Alignment Fusion Network (SAFNet) for HDR imaging is proposed. After extracting pyramid features, it jointly refines valuable area masks and cross-exposure motion in selected regions with shared decoders, and then fuses a high-quality HDR image in an explicit way. This approach lets the model focus on finding valuable regions while estimating their easily detectable and meaningful motion. For further detail enhancement, a lightweight refinement module is introduced which benefits from the previously estimated optical flow, selection masks and initial prediction. Moreover, to facilitate learning on samples with large motion, a new window-partition cropping method is presented during training. Experiments on public and newly developed challenging datasets show that the proposed SAFNet not only exceeds previous state-of-the-art competitors quantitatively and qualitatively, but also runs an order of magnitude faster.
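For intuition, here is a minimal sketch of the explicit alignment-and-fusion idea described above, not the actual SAFNet code: the non-reference exposures are warped towards the reference with the estimated optical flow and blended with per-pixel selection masks in the linear HDR domain. The function names and the softmax normalization of the masks are assumptions.

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    # Warp img (N,C,H,W) towards the reference frame using optical flow
    # (N,2,H,W) and bilinear sampling.
    n, _, h, w = img.shape
    gy, gx = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    x = gx.unsqueeze(0) + flow[:, 0]
    y = gy.unsqueeze(0) + flow[:, 1]
    grid = torch.stack((2.0 * x / (w - 1) - 1.0, 2.0 * y / (h - 1) - 1.0), dim=-1)
    return F.grid_sample(img, grid, align_corners=True)

def selective_fusion(lin_low, lin_ref, lin_high, flow_low, flow_high, mask_logits):
    # Align the non-reference exposures to the reference in the linear HDR
    # domain, then blend with selection masks (softmax over the three
    # exposures so the weights sum to 1 at every pixel).
    aligned = torch.stack(
        [backward_warp(lin_low, flow_low), lin_ref, backward_warp(lin_high, flow_high)],
        dim=0,
    )
    weights = torch.softmax(mask_logits, dim=0)  # mask_logits: (3,N,1,H,W)
    return (aligned * weights).sum(dim=0)
```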
Existing labeled multi-exposure HDR datasets have facilitated research in related fields. However, results of recent methods tend to saturate on them due to their limited evaluative ability. We attribute this phenomenon to most of their samples having relatively small motion magnitude between LDR inputs and a relatively small saturation ratio in the reference image. To probe the performance gap between different algorithms, we propose a new challenging multi-exposure HDR dataset with enhanced motion range and saturated regions. There are 96 training samples and 27 test samples in our developed Challenge123 dataset.
Dataset download link: https://huggingface.co/datasets/ltkong218/Challenge123.
To enhance the applicability of our dataset and promote future research, for each of the three content-related moving scenes, we further create under-, middle- and over-exposure LDR images and the corresponding HDR image. This means that each of our 96 training samples consists of three related scene folders (xxx_1, xxx_2 and xxx_3), each with its own LDR exposures and ground-truth HDR image; the experiments in this paper take one LDR input from each folder and use the middle scene (xxx_2) as the reference, as listed below.
Training samples for our experiments in this paper:
```
./Training/xxx_1/ldr_img_1.tif
./Training/xxx_2/ldr_img_2.tif
./Training/xxx_3/ldr_img_3.tif
./Training/xxx_2/exposure.txt
./Training/xxx_2/hdr_img.hdr
```
Test samples for our experiments in this paper:
```
./Test/xxx_1/ldr_img_1.tif
./Test/xxx_2/ldr_img_2.tif
./Test/xxx_3/ldr_img_3.tif
./Test/xxx_2/exposure.txt
./Test/xxx_2/hdr_img.hdr
```
`xxx` denotes the three-digit data ID.
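For convenience, here is a minimal loading sketch for one sample under the layout above. The 16-bit TIFF normalization, the gamma of 2.2 and the one-EV-per-line format of exposure.txt are assumptions; the evaluation scripts in this repository are authoritative for preprocessing.

```python
import cv2
import numpy as np

def read_sample(root, data_id):
    # Hypothetical loader, e.g. read_sample("./Training", "001"). The middle
    # scene (xxx_2) holds the reference exposure file and the HDR label.
    dirs = [f"{root}/{data_id}_{i}" for i in (1, 2, 3)]
    ldrs = [
        cv2.cvtColor(cv2.imread(f"{d}/ldr_img_{i}.tif", cv2.IMREAD_UNCHANGED),
                     cv2.COLOR_BGR2RGB).astype(np.float32) / 65535.0  # assumed 16-bit
        for i, d in zip((1, 2, 3), dirs)
    ]
    evs = np.loadtxt(f"{dirs[1]}/exposure.txt")  # assumed: one EV per line
    hdr = cv2.cvtColor(cv2.imread(f"{dirs[1]}/hdr_img.hdr", cv2.IMREAD_UNCHANGED),
                       cv2.COLOR_BGR2RGB)
    # Map LDR inputs to the linear HDR domain: inverse gamma, divide by 2**EV.
    lin = [np.power(l, 2.2) / (2.0 ** e) for l, e in zip(ldrs, evs)]
    return ldrs, lin, hdr
```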
I am sorry that I cannot release the training code of SAFNet due to requirements of my company, but readers can try to reproduce the experimental results according to my paper.
To test PSNR-m and PSNR-l on the SIGGRAPH17 dataset, set the right dataset path in eval_SAFNet_siggraph17.py and eval_SAFNet_S_siggraph17.py, and then run
```
$ python eval_SAFNet_siggraph17.py
$ python eval_SAFNet_S_siggraph17.py
```
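For reference, PSNR-m is conventionally computed after mu-law tonemapping, while PSNR-l is computed directly in the linear domain. A minimal sketch, assuming HDR values normalized to [0, 1] (the evaluation scripts above are authoritative):

```python
import numpy as np

MU = 5000.0  # mu-law parameter commonly used in HDR deghosting evaluation

def mu_tonemap(hdr):
    # Range-compress a linear HDR image with values in [0, 1].
    return np.log(1.0 + MU * hdr) / np.log(1.0 + MU)

def psnr(pred, gt, peak=1.0):
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def psnr_m_and_l(pred_hdr, gt_hdr):
    # PSNR-m on mu-law tonemapped images, PSNR-l on linear-domain images.
    return psnr(mu_tonemap(pred_hdr), mu_tonemap(gt_hdr)), psnr(pred_hdr, gt_hdr)
```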
To test SSIM-m, SSIM-l and HDR-VDP2 on the SIGGRAPH17 dataset: (1) generate the predicted HDR images in folder ./img_hdr_pred_siggraph17 by running $ python eval_SAFNet_siggraph17.py; (2) put the ground-truth HDR test images into folder ./matlab_evaluation/img_hdr_gt_siggraph17 and rename them as 001.hdr, 002.hdr, ... (a helper sketch follows below); (3) download hdrvdp-2.2.2 and put the unzipped folder into ./matlab_evaluation/hdrvdp-2.2.2. Then run the MATLAB script ./matlab_evaluation/eval_siggraph17.m.
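If helpful, step (2) can be scripted. A sketch assuming the Kalantari test scenes keep their original HDRImg.hdr naming under a local path of your choosing (adjust the glob pattern to your layout):

```python
import glob, os, shutil

# Copy the ground-truth HDR test images into the folder expected by the
# MATLAB script, renamed as 001.hdr, 002.hdr, ... The source pattern below
# is an assumption; point it at your own dataset location.
src_files = sorted(glob.glob("/path/to/siggraph17/Test/*/HDRImg.hdr"))
dst_dir = "./matlab_evaluation/img_hdr_gt_siggraph17"
os.makedirs(dst_dir, exist_ok=True)
for i, src in enumerate(src_files, start=1):
    shutil.copy(src, os.path.join(dst_dir, f"{i:03d}.hdr"))
```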
To test PSNR-m and PSNR-l on the Challenge123 dataset, set the right dataset path in eval_SAFNet_challenge123.py, and then run
```
$ python eval_SAFNet_challenge123.py
```
To test SSIM-m, SSIM-l and HDR-VDP2 on the Challenge123 dataset: (1) generate the predicted HDR images in folder ./img_hdr_pred_challenge123 by running $ python eval_SAFNet_challenge123.py; (2) put the ground-truth HDR test images into folder ./matlab_evaluation/img_hdr_gt_challenge123 and rename them as 001.hdr, 002.hdr, ...; (3) download hdrvdp-2.2.2 and put the unzipped folder into ./matlab_evaluation/hdrvdp-2.2.2. Then run the MATLAB script ./matlab_evaluation/eval_challenge123.m.
To test running time, model parameters and computation complexity (FLOPs), run
```
$ python benchmark_SAFNet.py
$ python benchmark_SAFNet_S.py
```
Beforehand, you may need to run `pip install thop` and `pip install pynvml`.
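For reference, such benchmarks typically combine thop for parameters/FLOPs with CUDA events for timing. A minimal sketch with an assumed generic model and inputs (the benchmark scripts in this repository are authoritative):

```python
import torch
from thop import profile

def benchmark(model, inputs, runs=100):
    # Count parameters and MACs (often reported as FLOPs) with thop, then
    # average the CUDA inference time over `runs` forward passes.
    macs, params = profile(model, inputs=inputs, verbose=False)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with torch.no_grad():
        for _ in range(10):  # warm-up
            model(*inputs)
        torch.cuda.synchronize()
        start.record()
        for _ in range(runs):
            model(*inputs)
        end.record()
        torch.cuda.synchronize()
    print(f"Params: {params / 1e6:.2f} M, MACs: {macs / 1e9:.2f} G, "
          f"Time: {start.elapsed_time(end) / runs:.2f} ms")
```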
When using any parts of the Dataset, Software or the Paper in your work, please cite the following paper:
```
@InProceedings{Kong_2024_ECCV,
  author = {Kong, Lingtong and Li, Bo and Xiong, Yike and Zhang, Hao and Gu, Hong and Chen, Jinwei},
  title = {SAFNet: Selective Alignment Fusion Network for Efficient HDR Imaging},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year = {2024}
}
```