# LibTorch Inference

The LibTorch inference for yolort. Both GPU and CPU are supported.
## Dependencies

- LibTorch 1.8.0+ together with the corresponding TorchVision 0.9.0+
- OpenCV
- CUDA 10.2+ [Optional]. We don't impose strict restrictions on the CUDA version.
## Usage

- First, set up the LibTorch environment variables.

  ```bash
  export TORCH_PATH=$(dirname $(python -c "import torch; print(torch.__file__)"))
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TORCH_PATH/lib/ # Optional
  ```
- Don't forget to compile `LibTorchVision` using the following script. The scripted model invokes TorchVision's C++ operators (notably `nms`) at runtime, so the inference program must link against `LibTorchVision` for those operators to be registered.

  ```bash
  git clone https://github.com/pytorch/vision.git
  cd vision
  git checkout release/0.9  # Double check the version of TorchVision currently in use
  mkdir build && cd build
  # Add `-DWITH_CUDA=on` below if you're using GPU
  cmake .. -DTorch_DIR=$TORCH_PATH/share/cmake/Torch -DCMAKE_INSTALL_PREFIX=./install
  cmake --build .
  cmake --install .
  # Set up the LibTorchVision environment variable
  export TORCHVISION_PATH=$PWD/install
  ```
- Generate the `TorchScript` model.

  Unlike ultralytics's `torch.jit.trace` mechanism, we use `torch.jit.script` to compile the YOLOv5 models, which contain the whole pre-processing (especially the `letterbox` op) and post-processing (especially the `nms` op) procedures, so you don't need to hand-write the C++ pre-processing and post-processing code (see the C++ sketch after this list).

  ```python
  import torch

  from yolort.models import yolov5n

  model = yolov5n(pretrained=True)
  model.eval()
  traced_model = torch.jit.script(model)
  traced_model.save("yolov5n.torchscript.pt")
  ```
- Then compile the source code.

  ```bash
  cd deployment/libtorch
  mkdir build && cd build
  cmake .. -DTorch_DIR=$TORCH_PATH/share/cmake/Torch -DTorchVision_DIR=$TORCHVISION_PATH/share/cmake/TorchVision
  make
  ```
- Now you can run inference on your own images.

  ```bash
  ./yolort_torch [--input_source ../../../test/assets/zidane.jpg]
                 [--checkpoint ../yolov5n.torchscript.pt]
                 [--labelmap ../../../notebooks/assets/coco.names]
                 [--gpu] # GPU switch, optional, off by default
  ```
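To make it concrete that no pre- or post-processing needs to be written by hand on the C++ side, here is a minimal sketch of loading the scripted model and running it on a single image with the LibTorch C++ API. This is an illustrative sketch, not the bundled `yolort_torch` source: the `(losses, detections)` output layout with `"boxes"`/`"scores"`/`"labels"` keys is an assumption modeled on torchvision-style detection models, and the file names are placeholders.

```cpp
#include <iostream>
#include <vector>

#include <opencv2/opencv.hpp>
#include <torch/script.h>
#include <torchvision/vision.h> // linking LibTorchVision registers ops such as `nms`

int main() {
  // Load the TorchScript module exported in the Python step above.
  torch::jit::script::Module module = torch::jit::load("yolov5n.torchscript.pt");
  module.eval();

  // Read an image and convert it to a float CHW tensor in [0, 1].
  // No letterbox resizing here: the scripted model does that itself.
  cv::Mat img = cv::imread("zidane.jpg");
  cv::cvtColor(img, img, cv::COLOR_BGR2RGB);
  torch::Tensor tensor =
      torch::from_blob(img.data, {img.rows, img.cols, 3}, torch::kByte)
          .permute({2, 0, 1})
          .to(torch::kFloat32)
          .div(255.0);

  // The model takes a list of image tensors, as torchvision detectors do.
  std::vector<torch::Tensor> images{tensor};
  auto output = module.forward({torch::jit::IValue(images)});

  // Assumption: the scripted model returns a (losses, detections) tuple,
  // where detections is a List[Dict[str, Tensor]] with one entry per image.
  auto detections =
      output.toTuple()->elements()[1].toList().get(0).toGenericDict();
  std::cout << "boxes:  " << detections.at("boxes").toTensor().sizes() << "\n"
            << "scores: " << detections.at("scores").toTensor().sizes() << std::endl;
  return 0;
}
```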
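As for the `--gpu` switch, the following fragment, continuing the sketch above, shows roughly what such a flag amounts to, assuming CUDA-enabled LibTorch and LibTorchVision builds:

```cpp
// Assumption: a CUDA-enabled LibTorch build; falls back to CPU otherwise.
torch::Device device = torch::cuda::is_available() ? torch::Device(torch::kCUDA)
                                                   : torch::Device(torch::kCPU);
module.to(device);                        // move the scripted model's weights
torch::Tensor input = tensor.to(device);  // move the input image tensor as well
```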