Create CMake option onnxruntime_USE_VCPKG
#21348
Conversation
I think CMake is a great choice. Besides, we might decouple the work: you could create a vcpkg config file, install all the packages via `vcpkg install`, then let CMake search for the installed packages when running ONNX Runtime's build.py script. Then, in theory, you don't need to modify any of ONNX Runtime's CMake files. Whichever way you choose to go, we greatly appreciate your contribution.
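The decoupled workflow suggested above can be sketched roughly as follows. This is a hypothetical minimal setup, not the PR's actual files; the package and target names (`protobuf`, `my_target`) are illustrative only:

```cmake
# Sketch only. Assumes a vcpkg.json manifest next to the top-level
# CMakeLists.txt that lists the dependencies, installed beforehand with:
#
#   vcpkg install        # manifest mode, reads vcpkg.json
#   cmake -S . -B build \
#     -DCMAKE_TOOLCHAIN_FILE=<vcpkg-root>/scripts/buildsystems/vcpkg.cmake
#
# With the vcpkg toolchain file set, a plain find_package() call resolves
# the vcpkg-installed packages, so no project CMake file needs to change:
find_package(protobuf CONFIG REQUIRED)              # provided by vcpkg
target_link_libraries(my_target PRIVATE protobuf::libprotobuf)
```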
36919b0 New GitHub Actions jobs for vcpkg build
6227f5f Trying to merge

* Detect protoc path for build.py
* Run vcpkg for each build.py run: reorganized the vcpkg install folder and the CMake build directory
`onnxruntime_USE_VCPKG`
* CUDA build will be tried next time...
* Remove onnxruntime_vcpkg_deps.cmake

Updated the note and title. Dropped CUDA feature support in this PR.

* add dropped changes
/azp run Big Models, Linux Android Emulator QNN CI Pipeline, Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux OpenVINO CI Pipeline, Linux QNN CI Pipeline, MacOS CI Pipeline
/azp run Windows ARM64 QNN CI Pipeline, Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows x64 QNN CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed
Azure Pipelines successfully started running 9 pipeline(s).
We probably need to wait for the next absl release.
/azp run Big Models, Linux Android Emulator QNN CI Pipeline, Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux OpenVINO CI Pipeline, Linux QNN CI Pipeline, MacOS CI Pipeline
/azp run Windows ARM64 QNN CI Pipeline, Windows CPU CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows x64 QNN CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed
Azure Pipelines successfully started running 9 pipeline(s).
Azure Pipelines successfully started running 8 pipeline(s).
Hi, is there a plan for this PR? Any feedback is welcome. Thanks 😊
Sorry, I was on vacation last month. Just came back.
Probably with the latest vcpkg release. I will check it within a few days.
On Tue, Sep 10, 2024, Changming Sun commented on this pull request, in cmake/vcpkg.json:
> + "license": "MIT",
+ "dependencies": [
+ "abseil",
+ {
+ "name": "boost-config",
+ "version>=": "1.82.0"
+ },
+ {
+ "name": "boost-mp11",
+ "version>=": "1.82.0"
+ },
+ "cpuinfo",
+ "cxxopts",
+ "date",
+ "dlpack",
+ "eigen3",
How can we avoid depending on your personal repos?
* https://github.com/microsoft/vcpkg/releases/tag/2024.08.23
* remove abseil, eigen3 in configuration

* cmake: use eigen from FetchContent
* ONNX will be used with FetchContent. The current vcpkg upstream doesn't support ONNX 1.16.0+, so here we will mix FetchContent and vcpkg-supported find_package.
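The mixed FetchContent/find_package approach described here matches CMake's dependency-providers guidance: since CMake 3.24, `FetchContent_Declare` accepts `FIND_PACKAGE_ARGS`, which makes CMake try `find_package()` first (satisfied by vcpkg when its toolchain is active) and fall back to downloading sources otherwise. A hedged sketch, not the PR's exact code; the flatbuffers URL is just an example dependency:

```cmake
cmake_minimum_required(VERSION 3.24)  # FIND_PACKAGE_ARGS requires 3.24+
include(FetchContent)

# Try find_package(flatbuffers CONFIG) first; if vcpkg (or the system)
# already provides it, the source download below is skipped entirely.
FetchContent_Declare(
  flatbuffers
  URL https://github.com/google/flatbuffers/archive/refs/tags/v23.5.26.zip
  FIND_PACKAGE_ARGS NAMES flatbuffers CONFIG
)
FetchContent_MakeAvailable(flatbuffers)
```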
/azp run Big Models, Linux Android Emulator QNN CI Pipeline, Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline
/azp run Linux OpenVINO CI Pipeline, Linux QNN CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, Windows ARM64 QNN CI Pipeline, Windows CPU CI Pipeline, Windows GPU CUDA CI Pipeline
Azure Pipelines successfully started running 6 pipeline(s).
Azure Pipelines successfully started running 7 pipeline(s).
/azp run Windows GPU DML CI Pipeline, Windows GPU Doc Gen CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows x64 QNN CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline
Azure Pipelines successfully started running 7 pipeline(s).
Thank you!
Thank you, @snnn. I will submit another PR if microsoft/vcpkg needs it.
Changes
* Add a CMake option `onnxruntime_USE_VCPKG`. It will be used in the vcpkg port. Especially, the ONNX, Protobuf, and Flatbuffers versions can be different.
* Change `onnxruntime_external_deps.cmake`'s `FetchContent_Declare` to try `find_package`. See https://cmake.org/cmake/help/latest/guide/using-dependencies/index.html
* Move `FetchContent_Declare` and `FetchContent_MakeAvailable` (or `onnxruntime_fetchcontent_makeavailable`) to closer lines. It was too hard to navigate the entire file to search related sections...
* Alias `IMPORTED` targets like build targets (e.g. `ONNX::onnx` --> `onnx`)
* Triplets: `x64-windows` and `x64-osx`
* `--use_vcpkg` option: it needs `CMAKE_TOOLCHAIN_FILE` with the vcpkg.cmake toolchain script
* `--compile_no_warning_as_error` is recommended because library version differences will cause unexpected compiler warnings
* `Vcpkg` job for Windows and macOS, similar to the CMakePresets.json usage
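The `IMPORTED`-target aliasing mentioned in the list above could look roughly like this. This is an illustrative sketch only; the PR's actual wiring may differ, and `my_component` is a hypothetical target:

```cmake
# When vcpkg supplies ONNX, find_package() creates the namespaced IMPORTED
# target ONNX::onnx, while an in-tree FetchContent build exposes a plain
# `onnx` target. Aliasing lets the rest of the build link one name either way.
if(onnxruntime_USE_VCPKG)
  find_package(ONNX CONFIG REQUIRED)
  if(TARGET ONNX::onnx AND NOT TARGET onnx)
    # ALIAS of a non-global IMPORTED target requires CMake 3.18+
    add_library(onnx ALIAS ONNX::onnx)
  endif()
endif()
target_link_libraries(my_component PRIVATE onnx)
```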
Motivation and Context
Future Works?
* More feature coverage with the vcpkg-supported libraries