
RRTN-old-film-restoration

This repository contains the code for the paper Restoring Degraded Old Films with Recursive Recurrent Transformer Networks.

Much of this code is based on the prior work Bringing Old Films Back to Life, and parts of the model code are based on BasicVSR++. For more details, please refer to the corresponding papers.

Usage

  1. Clone the repository:

    git clone https://github.com/mountln/RRTN-old-film-restoration.git
    cd RRTN-old-film-restoration
  2. Install the required dependencies:

    conda env create -f environment.yml
    conda activate rrtn
    mim install mmcv  # install mmcv
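
    To check that the environment is set up correctly, you can try importing the main dependencies (a generic sanity check, not a script from this repository):

    python -c "import torch, mmcv; print(torch.__version__, mmcv.__version__, torch.cuda.is_available())"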

Restore Old Films

  1. Create the pretrained_models/ folder.

    mkdir pretrained_models
  2. Download the pretrained models (rrtn_128_first.pth, rrtn_128_second.pth, raft-sintel.pth) and save them in the ./pretrained_models directory.
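
    For example, with wget (the URLs below are placeholders for the actual download links, which are not listed here):

    wget -P pretrained_models <url-of-rrtn_128_first.pth>
    wget -P pretrained_models <url-of-rrtn_128_second.pth>
    wget -P pretrained_models <url-of-raft-sintel.pth>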

  3. (Optional) Remove duplicate frames. This step is optional, but some old films downloaded from the Internet contain duplicate frames, and removing them can sometimes improve the results. One way to do this with ffmpeg is shown below; other tools for removing duplicate frames also exist.
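
    A common approach (not specific to this repository) uses ffmpeg's mpdecimate filter, which drops frames that barely differ from the previous one:

    ffmpeg -i input.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB output.mp4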

  4. Prepare the frames of the video to be processed, following the directory structure of test_data_sample/ (an ffmpeg example for extracting frames is shown after the tree). The folder structure should be as follows:

    test_data_sample
    ├── video_1
    │   ├── 00001.png
    │   ├── 00002.png
    │   ├── ...
    │   └── xxxxx.png
    ├── ...
    └── video_n
        ├── 00001.png
        ├── 00002.png
        ├── ...
        └── xxxxx.png
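
    For example, assuming the source video is named video_1.mp4 (a generic ffmpeg command; the file name is a placeholder), zero-padded PNG frames matching this layout can be extracted with:

    mkdir -p test_data_sample/video_1
    ffmpeg -i video_1.mp4 test_data_sample/video_1/%05d.png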
    
  5. Run restore.py to restore the videos. For example, to restore videos in the test_data_sample/ folder, use the following command:

    python VP_code/restore.py --name rrtn --model_name rrtn --model_path_first pretrained_models/rrtn_128_first.pth --model_path_second pretrained_models/rrtn_128_second.pth --temporal_length 30 --temporal_stride 15 --input_video_url test_data_sample

    If GPU memory is insufficient, try reducing --temporal_length, --temporal_stride, or the resolution of the input frames (see the example below).
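
    For example, to halve the resolution of an extracted frame sequence with ffmpeg (a generic command; the output folder name is a placeholder):

    mkdir -p test_data_sample/video_1_half
    ffmpeg -i test_data_sample/video_1/%05d.png -vf "scale=iw/2:-2" test_data_sample/video_1_half/%05d.png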

Train Models

  1. Download the noise data. The same noise data is also available from the links provided by Bringing Old Films Back to Life or DeepRemaster.

  2. Download the REDS dataset.

  3. Modify the paths in configs/rrtn.yaml.
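
    The exact keys are defined in configs/rrtn.yaml itself; the excerpt below is only a hypothetical illustration of what path entries might look like, not the file's real schema:

    # hypothetical excerpt; check configs/rrtn.yaml for the real key names
    datasets:
      train:
        dataroot_gt: /path/to/REDS
        noise_root: /path/to/noise_data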

  4. To train the model for the first recursion, use the following command:

    python VP_code/main_gan.py --name rrtn --num_recursion 1 --epoch 30 --gpus 4
  5. To train the model for further recursions, use the following command:

    python VP_code/main_gan.py --name rrtn --num_recursion 2 --epoch 30 --gpus 4
