Add PaddlePaddle export and inference #9240
Conversation
Test on YOLOv5 Docker environment with paddlepaddle-gpu v2.2 Signed-off-by: Katteria <[email protected]>
for more information, see https://pre-commit.ci
@kisaragychihaya thanks for the PR! Let us take a look at the best way to integrate PaddlePaddle exports. I'm not sure if index 5 is appropriate or if we should place this after TF exports. |
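On placement, one option is to append PaddlePaddle after the TF exports in the formats table. A rough sketch of that ordering follows; the names and suffixes track the usual YOLOv5 conventions but are assumptions here, not the merged table:

```python
# Sketch of an export-formats table with PaddlePaddle appended after the
# TensorFlow formats, per the discussion. Suffixes are indicative only.
def export_formats():
    # (display name, CLI --include argument, output suffix)
    return [
        ('PyTorch', '-', '.pt'),
        ('TorchScript', 'torchscript', '.torchscript'),
        ('ONNX', 'onnx', '.onnx'),
        ('OpenVINO', 'openvino', '_openvino_model'),
        ('TensorRT', 'engine', '.engine'),
        ('CoreML', 'coreml', '.mlmodel'),
        ('TensorFlow SavedModel', 'saved_model', '_saved_model'),
        ('TensorFlow GraphDef', 'pb', '.pb'),
        ('TensorFlow Lite', 'tflite', '.tflite'),
        ('TensorFlow Edge TPU', 'edgetpu', '_edgetpu.tflite'),
        ('TensorFlow.js', 'tfjs', '_web_model'),
        ('PaddlePaddle', 'paddle', '_paddle_model'),  # new format, after TF exports
    ]
```

Placing the new row last avoids renumbering any existing format index.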
@kisaragychihaya ok we definitely want to move forward with PaddlePaddle export, but first I need to restructure a few parts of YOLOv5 to accommodate new formats, so I'll make these changes in the coming days in separate PRs. In the meantime what I need from you is three things:
Every format that we export to except TF.js can be run directly with detect.py, val.py, and PyTorch Hub, so I need the PaddlePaddle loading and inference example code from you here to implement this in DetectMultiBackend() class. @AyushExel I'm going to add placeholders for PaddlePaddle models and NCNN models. Can you think of any other formats that we may want to add in the future? I'd rather insert all possible formats now with placeholders so that the structure is there and future users might contribute. |
Signed-off-by: Glenn Jocher <[email protected]>
@kisaragychihaya ok I've cleaned up Paddle export so it works correctly and left placeholders for you to insert Paddle model loading code and Paddle model inference code. Please add code to these sections to support loading and inference of YOLOv5 Paddle models: Paddle model load: Paddle model inference: EDIT: Converted export from ONNX2Paddle to PyTorch2Paddle, simpler and more direct. |
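For reference, the two placeholders could be filled along the lines of the sketch below, based on the Paddle Inference Python API (`paddle.inference`). The helper names and the directory layout (a `*.pdmodel` graph plus `*.pdiparams` weights inside the exported `_paddle_model` folder) are assumptions, not the final DetectMultiBackend() code:

```python
# Hedged sketch of Paddle model load + inference for DetectMultiBackend().
# Assumes a paddle.jit-exported directory containing *.pdmodel and *.pdiparams.
from pathlib import Path


def paddle_files(weights_dir):
    """Locate the graph/params pair inside an exported Paddle model directory."""
    model = next(Path(weights_dir).rglob('*.pdmodel'))
    return str(model), str(model.with_suffix('.pdiparams'))


def load_paddle(weights_dir):
    """Paddle model load: build a predictor from graph + parameters."""
    import paddle.inference as pdi  # paddlepaddle or paddlepaddle-gpu
    model_file, params_file = paddle_files(weights_dir)
    config = pdi.Config(model_file, params_file)
    predictor = pdi.create_predictor(config)
    input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
    return predictor, input_handle


def infer_paddle(predictor, input_handle, im):
    """Paddle model inference: im is a float32 numpy array, e.g. (1, 3, 640, 640)."""
    input_handle.copy_from_cpu(im)
    predictor.run()
    return [predictor.get_output_handle(n).copy_to_cpu() for n in predictor.get_output_names()]
```

The copy_from_cpu/copy_to_cpu pattern is the documented Paddle Inference flow; the actual PR may wire this differently into DetectMultiBackend().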
@glenn-jocher No, I can't think of any other formats. This looks good |
@AyushExel I looked at Tencent NCNN. They have an intermediary called PNNX that's like ONNX but PyTorch-only, which they claim makes it lightweight and faster, but they have no pip package installation method yet for their converters and it's a bit more complicated. I think I'll just do Paddle here and leave it at that for the near-medium term. |
@kisaragychihaya thanks for the updates! I cleaned it up a bit and can confirm that detect.py and val.py now work with Paddle models! I noticed that CPU inference is quite slow though, about 500 ms here vs 200 ms for torch. Is there a way to improve CPU speeds, or if not, to utilize the GPU? E.g. when --device 0 is passed, the Paddle models will currently still execute on CPU. Do you know how to run inference with them on GPU? |
Using GPU version, CUDA 11.6 |
@kisaragychihaya thanks for the updates! I updated for paddlepaddle-gpu auto-install if CUDA is requested and supported. I reached out to the Paddle team for a review, but everything looks good to me. |
@kisaragychihaya PR is merged. Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐ @kalenmike @AlanDimmer we have a new PaddlePaddle export format we need to add to HUB! |
Usage
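A plausible end-to-end workflow, assuming the standard YOLOv5 CLI and that Paddle export is selected with `--include paddle` (commands and dependency names are illustrative):

```shell
# Assumed workflow; requires a YOLOv5 checkout and the Paddle dependencies.
pip install paddlepaddle x2paddle                       # export requirements
python export.py --weights yolov5s.pt --include paddle  # writes yolov5s_paddle_model/
python detect.py --weights yolov5s_paddle_model/        # inference
python val.py --weights yolov5s_paddle_model/           # validation
```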
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
Support for PaddlePaddle export added to YOLOv5.
📊 Key Changes
- `yaml` import.
- `metadata` parameter for consistency.
- `export_paddle` function defined to enable PaddlePaddle exporting.
- `common.py` updated to allow loading and running models exported in PaddlePaddle format.
- `benchmarks.py` updated to include PaddlePaddle as a new, currently unsupported format for inference benchmarking.

🎯 Purpose & Impact