Official PyTorch implementation of the paper "EvTexture: Event-driven Texture Enhancement for Video Super-Resolution" (ICML 2024).
🌐 Project | 📃 Paper | 🖼️ Poster
Authors: Dachun Kai 📧, Jiayao Lu, Yueyi Zhang 📧, Xiaoyan Sun, University of Science and Technology of China
Feel free to ask questions. If our work helps, please don't hesitate to give us a ⭐!
- Provide a script for inference on the user's own video
- 2024/07/02: Release the colab file for a quick test
- 2024/06/28: Release details to prepare datasets
- 2024/06/08: Publish docker image
- 2024/06/08: Release pretrained models and test sets for quick testing
- 2024/06/07: Video demos released
- 2024/05/25: Initialize the repository
- 2024/05/02: 🎉🎉 Our paper was accepted to ICML 2024
Video demos:
- Vid4_City.mp4
- Vid4_Foliage.mp4
- REDS_000.mp4
- REDS_011.mp4
- Dependencies: Miniconda, CUDA Toolkit 11.1.1, torch 1.10.2+cu111, and torchvision 0.11.3+cu111.
- Run in Conda

```bash
conda create -y -n evtexture python=3.7
conda activate evtexture
pip install torch-1.10.2+cu111-cp37-cp37m-linux_x86_64.whl
pip install torchvision-0.11.3+cu111-cp37-cp37m-linux_x86_64.whl
git clone https://github.com/DachunKai/EvTexture.git
cd EvTexture && pip install -r requirements.txt && python setup.py develop
```
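If you do not have the wheel files locally, the same versions can typically be installed straight from the PyTorch wheel index; this is a sketch, assuming the `cu111` index still hosts these builds:

```bash
pip install torch==1.10.2+cu111 torchvision==0.11.3+cu111 \
    -f https://download.pytorch.org/whl/cu111/torch_stable.html
```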
- Run in Docker 🐳
Note: before running the Docker image, make sure to install nvidia-docker by following the official instructions.
[Option 1] Directly pull the published Docker image we have provided from Alibaba Cloud.
```bash
docker pull registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest
```
[Option 2] We also provide a Dockerfile that you can use to build the image yourself.
```bash
cd EvTexture && docker build -t evtexture ./docker
```
The pulled or self-built Docker image contains a complete conda environment named `evtexture`. After running the image, you can mount your data and operate within this environment:

```bash
source activate evtexture && cd EvTexture && python setup.py develop
```
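A minimal sketch of starting the container with GPU access and your data mounted; the host path and the in-container `/EvTexture/datasets` target are placeholders, so adjust them to the image's actual layout:

```bash
docker run --gpus all -it \
    -v /path/to/your/datasets:/EvTexture/datasets \
    registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest /bin/bash
```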
- Download the pretrained models from (Releases / OneDrive / Google Drive / Baidu Cloud (n8hg)) and place them in `experiments/pretrained_models/EvTexture/`. The network architecture code is in `evtexture_arch.py`.
  - EvTexture_REDS_BIx4.pth: trained on the REDS dataset with BI degradation for $4\times$ SR scale.
  - EvTexture_Vimeo90K_BIx4.pth: trained on the Vimeo-90K dataset with BI degradation for $4\times$ SR scale.
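To sanity-check a downloaded checkpoint, here is a minimal loading sketch; storing the weights under a `params` key is an assumption based on the conventions of BasicSR, which this repo builds on:

```python
import torch

# Load on CPU just to inspect the checkpoint contents.
ckpt = torch.load(
    "experiments/pretrained_models/EvTexture/EvTexture_REDS_BIx4.pth",
    map_location="cpu",
)
print(list(ckpt.keys()))  # typically ['params'] for BasicSR-style checkpoints
```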
- Download the preprocessed test sets (including events) for REDS4 and Vid4 from (Releases / OneDrive / Google Drive / Baidu Cloud (n8hg)), and place them in `datasets/`.
  - Vid4_h5: HDF5 files containing preprocessed test datasets for Vid4.
  - REDS4_h5: HDF5 files containing preprocessed test datasets for REDS4.
- Run the following commands:
  - Test on Vid4 for 4x VSR:

```bash
./scripts/dist_test.sh [num_gpus] options/test/EvTexture/test_EvTexture_Vid4_BIx4.yml
```

  - Test on REDS4 for 4x VSR:

```bash
./scripts/dist_test.sh [num_gpus] options/test/EvTexture/test_EvTexture_REDS4_BIx4.yml
```

  This will generate the inference results in `results/`. The output results on REDS4 and Vid4 can be downloaded from (Releases / OneDrive / Google Drive / Baidu Cloud (n8hg)).
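For example, to test on Vid4 with a single GPU, replace `[num_gpus]` with `1`:

```bash
./scripts/dist_test.sh 1 options/test/EvTexture/test_EvTexture_Vid4_BIx4.yml
```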
- Both video and event data are required as input, as shown in the example below. We package each video and its event data into an HDF5 file.
- Example: the structure of the `calendar.h5` file from the Vid4 dataset is shown below (a reading sketch follows this list).

```
calendar.h5
├── images
│   ├── 000000  # frame, ndarray, [H, W, C]
│   └── ...
├── voxels_f
│   ├── 000000  # forward event voxel, ndarray, [Bins, H, W]
│   └── ...
└── voxels_b
    ├── 000000  # backward event voxel, ndarray, [Bins, H, W]
    └── ...
```
- To simulate and generate the event voxels, refer to the dataset preparation details in DataPreparation.md.
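To poke at the HDF5 layout directly, here is a minimal reading sketch; it assumes `h5py` and `numpy` are installed and that the file sits at `datasets/Vid4_h5/calendar.h5` (the exact location inside `datasets/` may differ):

```python
import h5py
import numpy as np

# Frames and forward/backward event voxels share the same zero-padded keys.
with h5py.File("datasets/Vid4_h5/calendar.h5", "r") as f:
    frame = np.asarray(f["images/000000"])    # frame, [H, W, C]
    vox_f = np.asarray(f["voxels_f/000000"])  # forward voxel, [Bins, H, W]
    vox_b = np.asarray(f["voxels_b/000000"])  # backward voxel, [Bins, H, W]
    print(frame.shape, vox_f.shape, vox_b.shape)
```

For intuition about how such voxels are commonly produced, below is a generic bilinear-in-time voxel grid, a standard event-camera representation; this is only a sketch, not the repository's pipeline (DataPreparation.md is authoritative):

```python
import numpy as np

def events_to_voxel(t, x, y, p, bins, H, W):
    """Accumulate events into a [bins, H, W] grid.

    t: event timestamps (float array); x, y: integer pixel coordinates;
    p: polarities (+1/-1). Each event is split between its two nearest
    temporal bins with bilinear weights.
    """
    voxel = np.zeros((bins, H, W), dtype=np.float32)
    # Normalize timestamps to the continuous bin axis [0, bins - 1].
    t_norm = (t - t[0]) / max(t[-1] - t[0], 1e-9) * (bins - 1)
    left = np.floor(t_norm).astype(int)
    for offset in (0, 1):
        b = np.clip(left + offset, 0, bins - 1)
        w = np.clip(1.0 - np.abs(t_norm - (left + offset)), 0.0, 1.0)
        np.add.at(voxel, (b, y, x), w * p)  # scatter-add per event
    return voxel
```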
🛠️ We are developing a convenient script to allow users to quickly use our EvTexture model to upscale their own videos. However, our spare time is limited, so please stay tuned!
If you find the code and pre-trained models useful for your research, please consider citing our paper. 😊
```bibtex
@inproceedings{kai2024evtexture,
  title={{E}v{T}exture: {E}vent-driven {T}exture {E}nhancement for {V}ideo {S}uper-{R}esolution},
  author={Kai, Dachun and Lu, Jiayao and Zhang, Yueyi and Sun, Xiaoyan},
  booktitle={Proceedings of the 41st International Conference on Machine Learning},
  pages={22817--22839},
  year={2024},
  volume={235},
  publisher={PMLR}
}
```
If you meet any problems, please describe them in issues or contact:
- Dachun Kai: [email protected]
This project is released under the Apache-2.0 license. Our work is built upon BasicSR, an open-source toolbox for image and video restoration tasks. Thanks for the inspiration and code from RAFT, event_utils, and EvTexture-jupyter.