# Freeze PyTorch Model


Note: This tutorial assumes the `nnfusion` CLI has been installed as described in the Build Guide.

NNFusion leverages ONNX to support PyTorch, so this tutorial focuses on how to freeze an ONNX model from PyTorch source code. You can find the ONNX ops supported by NNFusion here.

## Freeze a model with the PyTorch ONNX exporter

Please refer to the PyTorch ONNX section to convert a PyTorch model to the ONNX format; the exporter currently supports the great majority of deep learning workloads.
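As a minimal sketch of what this looks like in practice, the snippet below exports a small model with `torch.onnx.export`. The module, tensor shapes, and output file name are illustrative choices, not part of NNFusion or the PyTorch docs.

```python
# A minimal sketch of freezing a PyTorch model to ONNX via torch.onnx.export.
# The module, shapes, and file name here are illustrative, not NNFusion code.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet()
model.eval()  # export in inference mode

# The exporter traces the model with a dummy input of the desired shape,
# so that shape is frozen into the exported graph.
dummy_input = torch.randn(1, 16)

torch.onnx.export(
    model,
    dummy_input,
    "tinynet.onnx",            # output ONNX file
    input_names=["input"],
    output_names=["output"],
    opset_version=11,          # pick an opset the downstream tool supports
)
```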

## Freeze a model with NNFusion pt_freezer

On top of the PyTorch ONNX exporter, we built a simple wrapper called pt_freezer. It wraps the exporter with checks for control flow and op availability (the latter is not implemented yet). We provide a self-explanatory VGG example for this tool:

```bash
# step 0: install prerequisites
sudo apt update && sudo apt install python3-pip
pip3 install onnx torch torchvision

# step 1: freeze the VGG16 model
python3 vgg16_model.py
```
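For orientation, the sketch below shows the underlying export step such a script performs: load torchvision's VGG16 and export it to `vgg16.onnx`. This is not the actual vgg16_model.py shipped with NNFusion, and pt_freezer's own API may differ; it is only a hedged illustration of the plain `torch.onnx.export` path.

```python
# Illustrative sketch (not the NNFusion vgg16_model.py): export torchvision's
# VGG16 to ONNX. Newer torchvision versions prefer weights= over pretrained=.
import torch
import torchvision

model = torchvision.models.vgg16(pretrained=True)
model.eval()

# VGG16 expects 224x224 RGB input; batch size 1 is frozen into the graph.
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "vgg16.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
print("exported vgg16.onnx")
```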

## Freeze a model from third parties

Of course, you can also freeze ONNX models from third-party libraries, such as Hugging Face Transformers, which supports exporting models to the ONNX format.
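Transformers ships its own ONNX export tooling; the hedged sketch below instead uses plain `torch.onnx.export` so it stays self-contained. The model name, output file name, and the (batch=3, sequence=512) dummy shapes are illustrative, chosen to line up with the BERT_base entry in the table that follows.

```python
# Illustrative sketch (not from the NNFusion repo): export Hugging Face's
# bert-base-cased to ONNX with a fixed batch=3, sequence=512 input shape.
import torch
from transformers import BertModel

# return_dict=False makes the model return a plain tuple, which traces cleanly.
model = BertModel.from_pretrained("bert-base-cased", return_dict=False)
model.eval()

batch_size, seq_len = 3, 512
dummy_input_ids = torch.randint(0, model.config.vocab_size, (batch_size, seq_len))
dummy_attention_mask = torch.ones(batch_size, seq_len, dtype=torch.long)

torch.onnx.export(
    model,
    (dummy_input_ids, dummy_attention_mask),  # passed positionally to forward()
    "pt-bert-base-cased.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],
    opset_version=11,
)
```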

## Frozen ONNX models

| model | NNFusion codegen flags | download link |
| --- | --- | --- |
| VGG16 | `-f onnx` | vgg16.onnx |
| BERT_base | `-f onnx -p 'batch:3;sequence:512'` | pt-bert-base-cased.onnx |