Add PaddlePaddle export and inference #9240

Merged
merged 38 commits into ultralytics:master on Sep 10, 2022
Conversation

kisaragychihaya
Contributor

@kisaragychihaya kisaragychihaya commented Sep 1, 2022

Tested in the YOLOv5 Docker environment with paddlepaddle-gpu v2.2

Signed-off-by: Katteria [email protected]

Usage

Export:          python export.py --weights yolov5s.pt --include paddle

Detect:          python detect.py --weights yolov5s_paddle_model/ 
Validate:        python val.py --weights yolov5s_paddle_model/ 
PyTorch Hub:     model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s_paddle_model/')
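Note that all three commands point at a directory rather than a single weights file: a PaddlePaddle export is a folder containing a `*.pdmodel` graph file and a `*.pdiparams` weights file. As a rough illustration (a hypothetical helper, not code from this PR), such an export could be recognized like this:

```python
from pathlib import Path
import tempfile

def is_paddle_model(weights) -> bool:
    # Hypothetical helper, for illustration only (not part of this PR):
    # a PaddlePaddle export is a directory containing a *.pdmodel graph
    # file plus a *.pdiparams weights file.
    w = Path(weights)
    return w.is_dir() and any(w.rglob("*.pdmodel")) and any(w.rglob("*.pdiparams"))

# Build a stand-in export directory and check it
with tempfile.TemporaryDirectory() as d:
    root = Path(d) / "yolov5s_paddle_model"
    root.mkdir()
    (root / "model.pdmodel").touch()
    (root / "model.pdiparams").touch()
    print(is_paddle_model(root))  # True

print(is_paddle_model("yolov5s.pt"))  # False
```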

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Support for PaddlePaddle export added to YOLOv5.

📊 Key Changes

  • 🚀 PaddlePaddle export option included in allowed export formats.
  • 🧹 Removed unused yaml import.
  • 🛠 Fixed OpenVINO export and added metadata parameter for consistency.
  • 💾 New export_paddle function defined to enable PaddlePaddle exporting.
  • 📦 Updates in common.py to allow loading and running models exported in PaddlePaddle format.
  • 📈 Modified benchmarks.py to include PaddlePaddle as a new, currently unsupported format for inference benchmarking.

🎯 Purpose & Impact

  • 🎨 Allows YOLOv5 models to be exported to PaddlePaddle, expanding the range of supported platforms and potential users.
  • 🛠 Improves code modularity and maintenance by removing unnecessary code and enhancing function signatures for consistency.
  • 🌍 Provides support for one of the popular machine learning frameworks from China, potentially increasing usability for developers in the Asian market.
  • ⚖️ While PaddlePaddle export is now possible, inference benchmarking does not support it yet, indicating future work to fully integrate this feature.

kisaragychihaya and others added 2 commits September 1, 2022 10:02
Tested in the YOLOv5 Docker environment with paddlepaddle-gpu v2.2

Signed-off-by: Katteria <[email protected]>
@glenn-jocher
Member

@kisaragychihaya thanks for the PR! Let us take a look at the best way to integrate PaddlePaddle exports. I'm not sure if index 5 is appropriate or if we should place this after the TF exports.

@kalenmike

@glenn-jocher
Member

@kisaragychihaya ok we definitely want to move forward with PaddlePaddle export, but first I need to restructure a few parts of YOLOv5 to accommodate new formats, so I'll make these changes in the coming days in separate PRs. In the meantime, what I need from you is three things:

  • Export to PaddlePaddle format code (already done in this PR).
  • Load model code for loading PaddlePaddle models.
  • Inference code for running PaddlePaddle model prediction.

Every format that we export to except TF.js can be run directly with detect.py, val.py, and PyTorch Hub, so I need the PaddlePaddle loading and inference example code from you here to implement this in the DetectMultiBackend() class.
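The dispatch idea behind a multi-backend loader can be sketched in pure Python (the names and format table below are illustrative assumptions, not the PR's actual DetectMultiBackend internals): the loader inspects the weights path to decide which backend to initialize, with Paddle exports identified by their directory form.

```python
from pathlib import Path

# Illustrative format table; the real DetectMultiBackend supports more
# formats and uses different internals.
SUFFIX_TO_BACKEND = {
    ".pt": "pytorch",
    ".onnx": "onnx",
    ".engine": "tensorrt",
    ".tflite": "tflite",
}

def guess_backend(weights: str) -> str:
    # Paddle exports are directories, e.g. yolov5s_paddle_model/
    if weights.rstrip("/").endswith("_paddle_model"):
        return "paddle"
    return SUFFIX_TO_BACKEND.get(Path(weights).suffix, "pytorch")

print(guess_backend("yolov5s_paddle_model/"))  # paddle
print(guess_backend("yolov5s.onnx"))           # onnx
print(guess_backend("yolov5s.pt"))             # pytorch
```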

@AyushExel I'm going to add placeholders for PaddlePaddle models and NCNN models. Can you think of any other formats that we may want to add in the future? I'd rather insert all possible formats now with placeholders so that the structure is there and future users might contribute.

@glenn-jocher
Member

glenn-jocher commented Sep 3, 2022

@kisaragychihaya ok I've cleaned up Paddle export so it works correctly and left placeholders for you to insert Paddle model loading code and Paddle model inference code.

[Screenshot 2022-09-03 at 23 06 09]

Please add code to these sections to support loading and inference of YOLOv5 Paddle models:

Paddle model load:
https://github.com/kisaragychihaya/yolov5/blob/83c76cd917d1661702f395aab78e3cbb066c68d2/models/common.py#L450-L451

Paddle model inference:
https://github.com/kisaragychihaya/yolov5/blob/83c76cd917d1661702f395aab78e3cbb066c68d2/models/common.py#L508-L509

EDIT: Converted the export from ONNX2Paddle to PyTorch2Paddle, which is simpler and more direct.

@glenn-jocher added the TODO (High priority items) label on Sep 3, 2022
@AyushExel
Contributor

@glenn-jocher No, I can't think of any other formats. This looks good.

@glenn-jocher
Member

glenn-jocher commented Sep 4, 2022

@AyushExel I looked at Tencent NCNN. They have an intermediary called PNNX that's like ONNX but PyTorch-only, which they claim makes it lighter and faster, but they have no pip package installation method yet for their converters, and it's a bit more complicated. I think I'll just do Paddle here and leave it at that for the near-to-medium term.

@glenn-jocher
Member

@kisaragychihaya thanks for the updates! I cleaned it up a bit and can confirm that detect.py and val.py now work with Paddle models!

[Screenshot 2022-09-06 at 19 25 49]

I noticed that CPU inference is quite slow though, about 500 ms here vs 200 ms for torch. Is there a way to improve CPU speeds, or failing that, to utilize the GPU? Currently, when --device 0 is passed, Paddle models still execute on the CPU. Do you know how to run inference with them on the GPU when --device 0 is passed?

@kisaragychihaya
Contributor Author


Use the GPU build of PaddlePaddle.

For CUDA 11.2:

python -m pip install paddlepaddle-gpu==2.3.2.post112 -f https://www.paddlepaddle.org.cn/whl/windows/mkl/avx/stable.html

For CUDA 11.6:

python -m pip install paddlepaddle-gpu==2.3.2.post116 -f https://www.paddlepaddle.org.cn/whl/windows/mkl/avx/stable.html
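The two commands above follow PaddlePaddle's GPU wheel versioning convention, where the CUDA version is encoded as a `.postNNN` suffix on the package version. A small helper (hypothetical, just to make the convention explicit) could build the pip requirement string:

```python
def paddle_gpu_requirement(paddle_version: str, cuda_version: str) -> str:
    # CUDA "11.2" -> "post112", so Paddle 2.3.2 + CUDA 11.2 -> 2.3.2.post112
    # (hypothetical helper illustrating the wheel naming convention above)
    post = "post" + cuda_version.replace(".", "")
    return f"paddlepaddle-gpu=={paddle_version}.{post}"

print(paddle_gpu_requirement("2.3.2", "11.2"))  # paddlepaddle-gpu==2.3.2.post112
print(paddle_gpu_requirement("2.3.2", "11.6"))  # paddlepaddle-gpu==2.3.2.post116
```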

@glenn-jocher changed the title from "Add PaddlePaddle Model Export" to "Add PaddlePaddle export and inference" on Sep 7, 2022
@glenn-jocher
Member

@kisaragychihaya thanks for the updates! I added auto-install of paddlepaddle-gpu when CUDA is requested and supported.

I reached out to the Paddle team for a review, but everything looks good to me.

@glenn-jocher removed the TODO (High priority items) label on Sep 10, 2022
@glenn-jocher merged commit e3e5122 into ultralytics:master on Sep 10, 2022
@glenn-jocher
Member

@kisaragychihaya PR is merged. Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐

@kalenmike @AlanDimmer we have a new PaddlePaddle export format we need to add to HUB!
