Easy inference for video networks. This fork adds support for SOFVSR (traiNNer version) via DirectML (DML).


FNsi/Video-Inference-directml

 
 

Easy Video Inference

This repository is an inference repo similar to the ESRGAN inference repository, but for various video machine learning models. The idea is to let anyone easily run various models on video without having to worry about different repo setups. PRs welcome.

Currently supported architectures

  • SOFVSR (traiNNer version)
    • Original SOFVSR SR net
    • RRDB SR net (untested in this fork; may not work)

Wheels needed: `torch==2.4.1`, `torch-directml==0.2.5.dev240914`, `numpy==2.1.3`, `progressbar==2.5`

For the remaining dependencies, installing the latest versions should be fine.
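Assuming a standard pip setup, the pinned wheels above can be installed in one step. Package names and versions are taken from this README; `opencv-python` and `ffmpeg-python` come from the requirements listed further down.

```shell
# Pinned wheels for the DirectML fork
pip install torch==2.4.1 torch-directml==0.2.5.dev240914 numpy==2.1.3 progressbar==2.5

# Remaining dependencies; latest versions should be fine
pip install opencv-python ffmpeg-python
```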

Known bug to fix: `--chop_forward` leaks memory during GPU-to-CPU transfers.

It is unclear how DML releases memory 😅, so the flag is only practical for small batches of frames.
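The splitting idea behind `--chop_forward` can be sketched without torch. This pure-NumPy version is a simplification: real implementations usually overlap the patches to hide seams, and `sr_fn` here is only a stand-in for the actual SR model. It recursively quarters a frame until each patch is small enough to upscale in one pass:

```python
import numpy as np

def chop_forward(x, upscale, min_size=4096, sr_fn=None):
    """Recursively split a (C, H, W) array into quadrants until each
    patch has at most min_size pixels, upscale each patch, and stitch
    the results back together."""
    if sr_fn is None:
        # Placeholder "model": nearest-neighbour upscale.
        sr_fn = lambda p: p.repeat(upscale, axis=1).repeat(upscale, axis=2)
    c, h, w = x.shape
    if h * w <= min_size:
        return sr_fn(x)
    h_half, w_half = h // 2, w // 2
    out = np.empty((c, h * upscale, w * upscale), dtype=x.dtype)
    for (hs, he), (ws, we) in [((0, h_half), (0, w_half)),
                               ((0, h_half), (w_half, w)),
                               ((h_half, h), (0, w_half)),
                               ((h_half, h), (w_half, w))]:
        patch = chop_forward(x[:, hs:he, ws:we], upscale, min_size, sr_fn)
        out[:, hs * upscale:he * upscale, ws * upscale:we * upscale] = patch
    return out
```

The memory-leak caveat applies to each patch's round trip between GPU and CPU, which is why the flag only helps for short clips here.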

Below is the original description.


Additional features

  • Automatic scale, number of frames, number of channels, and SR architecture detection
  • Automatic 'HD' RIFE model detection
  • Automatic beginning and end frame padding so all frames get included in output
  • Direct video input and output through ffmpeg
  • FP16 support for faster inference on RTX cards
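The beginning/end frame padding above can be illustrated with a small index helper. This is a sketch only: the README does not say which padding mode the repo uses, so reflection is assumed here.

```python
def padded_indices(num_frames, window):
    """Build a frame-index list with reflected padding so every
    original frame sits at the centre of a full temporal window."""
    pad = window // 2
    idx = list(range(num_frames))
    head = idx[1:pad + 1][::-1]     # mirror frames after the first
    tail = idx[-pad - 1:-1][::-1]   # mirror frames before the last
    return head + idx + tail
```

For a 5-frame clip and a 3-frame window this yields `[1, 0, 1, 2, 3, 4, 3]`, so the first and last frames also get full temporal windows in the output.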

Using this repo

Requirements: numpy, opencv-python, pytorch, progressbar2

Optional requirements: ffmpeg-python to use video input/output (requires ffmpeg to be installed)

Obtaining models

SOFVSR

RIFE

  • Converted .pth files: 1.3 | 1.4 | 1.5 (HD)
  • Model conversion script located in utils folder

TecoGAN

Upscaling exported frames

  • Place exported video frames in the input folder
  • Place model in the models folder
  • Example: python run.py ./models/video_model.pth

Upscaling video files

  • Place model in the models folder
  • Set --input to your input video
  • Set --output to your output video
  • Example: python run.py ./models/video_model.pth --input "./input/input_video.mp4" --output "./output/output_video.mp4"

Extra flags

  • --input: Specifies input directory or file
  • --output: Specifies output directory or file
  • --denoise: Denoises the chroma layer
  • --chop_forward: Splits tensors to avoid out-of-memory errors
  • --crf: The crf (quality) of the output video when using video input/output. Defaults to 0 (lossless)
  • --exp: RIFE exponential interpolation amount
  • --fp16: Speedup on RTX cards using HalfTensors

Planned architecture support

  • EDVR (modified)
  • RRN
  • Updated RIFE models
  • Deep Video Deinterlacing

Planned additional features

  • More FFMPEG options
  • Model chaining
  • Will probably modify this repository to also run image models such as ESRGAN
