Model Conversion

neosr-project edited this page Oct 18, 2024 · 12 revisions

Introduction

In this tutorial we will learn how to convert models using the neosr convert.py script.

Important

Before starting, make sure you have the dependencies installed:

pip install onnx onnxruntime-gpu onnxconverter-common onnxsim

General notes:

  • Not all networks can be converted to ONNX at the moment. The Dynamo-based exporter can solve this once it supports ATen PixelShuffle.

  • Some networks can only be exported using PyTorch >2.1, due to recent updates in the ONNX infrastructure.

  • Complex networks can take many minutes to convert. If the script looks 'stuck', just be patient.

  • Although generally safe in most situations, conversion to fp16 doesn't guarantee results identical to fp32.
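The fp16 caveat can be seen directly in NumPy (a generic numeric illustration, not neosr code): fp16 has a 10-bit mantissa and a maximum value of 65504, so down-casting an fp32 value can lose precision or overflow.

```python
import numpy as np

# fp16 has ~3 decimal digits of precision, so a round-trip through
# fp16 is close to the original fp32 value but not identical:
x32 = np.float32(0.1234567)
x16 = np.float16(x32)
print(x32, np.float32(x16))

# fp16 also has a much smaller range (max 65504), so large
# activations can overflow to infinity:
big = np.float16(70000.0)
print(big)  # -> inf
```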

Examples

Basic usage example:

python convert.py --input model.pth -net compact --output model.onnx

The example above will take a PyTorch compact model (4x scale, the default) and output the converted model in ONNX format.

Advanced usage example:

python convert.py --input omnisr.pth -net omnisr -s 2 -window 16 -fp16 --fulloptimization --output omnisr.onnx

The example above will take the model omnisr.pth of network type omnisr, trained with an upscaling ratio of 2x and a window_size of 16, convert it to both fp32 and fp16 ONNX, and then optimize the results using onnxsim.

Options

Below are all options for convert.py:


--input

The --input argument specifies the input path for the pytorch model. The network must be supported by neosr. This option is required for conversion.


--output

The --output argument specifies the output path for the ONNX model. This option is required for conversion.


-net, --network

The -net argument specifies the network type on which the input model was trained. The network must be supported by neosr (the same value as type) and use the hyperparameters set by default in neosr. This option is required for conversion.


-s, --scale

The -s argument specifies the upscaling ratio the input model was trained on. This value should be the same as the scale option used in training. Default: 4.
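For example, converting a hypothetical compact model trained at 2x (file names here are illustrative):

```shell
python convert.py --input model_2x.pth -net compact -s 2 --output model_2x.onnx
```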


-static, --static

The -static argument specifies a static input shape instead of a dynamic one. You must pass three separate int values (channels, height, width). For example:

python convert.py --input model.pth -net compact -static 3 256 256 --output model.onnx

The example above will output an ONNX file with a static input shape of (1, 3, 256, 256).


-nocheck, --nocheck

The -nocheck argument skips checking the converted model against the original PyTorch model. Not recommended, unless absolutely needed.
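To illustrate what such a check does (a hypothetical NumPy sketch, not convert.py's actual code): outputs are compared within a tolerance rather than for exact equality, since small numerical drift between backends is expected.

```python
import numpy as np

# Pretend these are outputs from the original PyTorch model and the
# converted ONNX model for the same input (values are illustrative):
pytorch_out = np.array([0.1, 0.5, 0.9], dtype=np.float32)
onnx_out = pytorch_out + np.float32(1e-6)  # tiny numerical drift

# Exact equality fails, but a tolerance-based comparison passes:
assert not np.array_equal(pytorch_out, onnx_out)
assert np.allclose(pytorch_out, onnx_out, rtol=1e-3, atol=1e-5)
print("outputs match within tolerance")
```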


-window, --window

The -window argument specifies the window_size used by transformer-based networks. It must be the same as the input model's. If you didn't specify window_size while training, you should not use this option. Default: None.
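For example, building on the advanced example above, an omnisr model trained at 2x with window_size 16 (file names illustrative):

```shell
python convert.py --input omnisr.pth -net omnisr -s 2 -window 16 --output omnisr.onnx
```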


-opset, --opset

The -opset argument specifies the ONNX opset version to use. Unless you encounter an error with the default, it should not be changed. Default: 17.
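If the default opset fails for your network, you can try a different version, for example (whether another opset helps depends on the network; file name illustrative):

```shell
python convert.py --input model.pth -net compact -opset 18 --output model.onnx
```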


-fp16, --fp16

The -fp16 argument converts the full-precision (fp32) model to half-precision (fp16), which increases inference speed. While this conversion is safe in most cases, it doesn't guarantee outputs identical to fp32. Default: False.
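For example, to produce an fp16 model alongside the fp32 one (file name illustrative):

```shell
python convert.py --input model.pth -net compact -fp16 --output model.onnx
```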


-optimize, --optimize

The -optimize argument optimizes the fp32 model (and the fp16 model, if -fp16 is set) using ONNX Optimizer at the EXTENDED level. This is usually a safe and relatively fast optimization. Default: False.
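For example (file name illustrative):

```shell
python convert.py --input model.pth -net compact -optimize --output model.onnx
```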


-fulloptimization, --fulloptimization

The -fulloptimization argument runs onnxsim on the fp32 model (and the fp16 model, if -fp16 is set). This is a costly operation and can take many minutes, or even hours for complex networks. Default: False.