EvTexture

Official PyTorch implementation of the paper "EvTexture: Event-driven Texture Enhancement for Video Super-Resolution" (ICML 2024).

🌐 Project | 📃 Paper | 🖼️ Poster

Authors: Dachun Kai📧️, Jiayao Lu, Yueyi Zhang📧️, Xiaoyan Sun, University of Science and Technology of China

Feel free to ask questions. If our work helps, please don't hesitate to give us a ⭐!

🚀 News

  • (Coming soon) A script for inference on your own video
  • 2024/07/02: Released the Colab notebook for a quick test
  • 2024/06/28: Released details on dataset preparation
  • 2024/06/08: Published the Docker image
  • 2024/06/08: Released pretrained models and test sets for quick testing
  • 2024/06/07: Released video demos
  • 2024/05/25: Initialized the repository
  • 2024/05/02: 🎉🎉 Our paper was accepted to ICML 2024

🔖 Table of Contents

  1. Video Demos
  2. Code
  3. Citation
  4. Contact
  5. License and Acknowledgement

🔥 Video Demos

$4\times$ upsampling results on the Vid4 and REDS4 test sets.

Vid4_City.mp4
Vid4_Foliage.mp4
REDS_000.mp4
REDS_011.mp4

Code

Installation

  • Dependencies: Miniconda, CUDA Toolkit 11.1.1, torch 1.10.2+cu111, and torchvision 0.11.3+cu111.

  • Run in Conda

    conda create -y -n evtexture python=3.7
    conda activate evtexture
    # install the locally downloaded PyTorch / torchvision CUDA 11.1 wheels
    pip install torch-1.10.2+cu111-cp37-cp37m-linux_x86_64.whl
    pip install torchvision-0.11.3+cu111-cp37-cp37m-linux_x86_64.whl
    git clone https://github.com/DachunKai/EvTexture.git
    cd EvTexture && pip install -r requirements.txt && python setup.py develop
  • Run in Docker 👍

    Note: before running the Docker image, make sure to install nvidia-docker by following the official instructions.

    [Option 1] Directly pull the published Docker image we have provided from Alibaba Cloud.

    docker pull registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest

    [Option 2] We also provide a Dockerfile that you can use to build the image yourself.

    cd EvTexture && docker build -t evtexture ./docker

    The pulled or self-built Docker image contains a complete conda environment named evtexture. After running the image, you can mount your data and operate within this environment.
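
    For example, a minimal docker run sketch for starting the container with GPU access and a mounted working directory (the host path /path/to/EvTexture and the container mount point are placeholders; the NVIDIA Container Toolkit mentioned above is required):

    docker run --gpus all -it --rm \
        -v /path/to/EvTexture:/workspace/EvTexture \
        registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest /bin/bash

    Inside the container, then run: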

    source activate evtexture && cd EvTexture && python setup.py develop
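
  • Quick check (optional)

    Whichever installation route you take, a simple sanity check that PyTorch imports and sees the GPU (assuming the environment above is active):

    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"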

Test

  1. Download the pretrained models from (Releases / OneDrive / Google Drive / Baidu Cloud (n8hg)) and place them in experiments/pretrained_models/EvTexture/. The network architecture code is in evtexture_arch.py.

    • EvTexture_REDS_BIx4.pth: trained on REDS dataset with BI degradation for $4\times$ SR scale.
    • EvTexture_Vimeo90K_BIx4.pth: trained on Vimeo-90K dataset with BI degradation for $4\times$ SR scale.
  2. Download the preprocessed test sets (including events) for REDS4 and Vid4 from (Releases / OneDrive / Google Drive / Baidu Cloud (n8hg)), and place them in datasets/.

    • Vid4_h5: HDF5 files containing preprocessed test datasets for Vid4.

    • REDS4_h5: HDF5 files containing preprocessed test datasets for REDS4.

  3. Run the following command:

    • Test on Vid4 for 4x VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/EvTexture/test_EvTexture_Vid4_BIx4.yml
    • Test on REDS4 for 4x VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/EvTexture/test_EvTexture_REDS4_BIx4.yml
      This will generate the inference results in results/. Our output results on REDS4 and Vid4 can be downloaded from (Releases / OneDrive / Google Drive / Baidu Cloud (n8hg)).
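
      Here [num_gpus] is the number of GPUs to use for distributed testing. For example, assuming a single GPU, the REDS4 test can be launched as:

      ./scripts/dist_test.sh 1 options/test/EvTexture/test_EvTexture_REDS4_BIx4.yml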

Data Preparation

  • Both video frames and event data are required as input. We package each video and its event data into a single HDF5 file, as shown in the example below.

  • Example: The structure of the calendar.h5 file from the Vid4 dataset is shown below.

    calendar.h5
    ├── images
    │   ├── 000000 # frame, ndarray, [H, W, C]
    │   ├── ...
    ├── voxels_f
    │   ├── 000000 # forward event voxel, ndarray, [Bins, H, W]
    │   ├── ...
    ├── voxels_b
    │   ├── 000000 # backward event voxel, ndarray, [Bins, H, W]
    │   ├── ...
    
  • To simulate and generate the event voxels, refer to the dataset preparation details in DataPreparation.md.
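
  • For reference, a minimal Python sketch (using h5py) for inspecting one of these files. The path below assumes the Vid4_h5 test set was extracted under datasets/, and the key 000000 follows the structure shown above:

    import h5py
    import numpy as np

    # hypothetical local path; adjust to wherever the test set was extracted
    with h5py.File('datasets/Vid4_h5/calendar.h5', 'r') as f:
        frame = np.asarray(f['images']['000000'])      # one frame, ndarray [H, W, C]
        voxel_f = np.asarray(f['voxels_f']['000000'])  # forward event voxel, ndarray [Bins, H, W]
        voxel_b = np.asarray(f['voxels_b']['000000'])  # backward event voxel, ndarray [Bins, H, W]
        print(len(f['images']), frame.shape, voxel_f.shape, voxel_b.shape)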

Inference on your own video

πŸ› οΈ We are developing a convenient script to allow users to quickly use our EvTexture model to upscale their own videos. However, our spare time is limited, so please stay tuned!

😊 Citation

If you find the code and pre-trained models useful for your research, please consider citing our paper. 😃

@inproceedings{kai2024evtexture,
  title={{E}v{T}exture: {E}vent-driven {T}exture {E}nhancement for {V}ideo {S}uper-{R}esolution},
  author={Kai, Dachun and Lu, Jiayao and Zhang, Yueyi and Sun, Xiaoyan},
  booktitle={Proceedings of the 41st International Conference on Machine Learning},
  pages={22817--22839},
  year={2024},
  volume={235},
  publisher={PMLR}
}

Contact

If you encounter any problems, please describe them in an issue or contact the authors.

License and Acknowledgement

This project is released under the Apache-2.0 license. Our work is built upon BasicSR, an open-source toolbox for image/video restoration tasks. Thanks for the inspiration and code from RAFT, event_utils, and EvTexture-jupyter.