How to get the current provider used in C++ inference stage? #4140
After building ORT with OpenVINO enabled, you must call CreateExecutionProviderFactory_OpenVINO() on the session object to actually use OpenVINO before calling Run(). The order in which the execution providers are set on the session object determines the preference in which they're used within ORT during the graph partitioning stage. GetExecutionProviderType is not relevant here unless you want to use custom ops.
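For illustration (not part of the original reply), here is a minimal C++ sketch of that registration order, assuming the OpenVINO provider factory header that ships with a --use_openvino build; the header path, the device string "CPU_FP32", and the model path are placeholders that may differ for your install:

```cpp
// Illustrative sketch only: header path, device string and model path are assumptions.
#include <onnxruntime_cxx_api.h>
#include <openvino_provider_factory.h>  // location may vary by install/build layout

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "ovep-example");
  Ort::SessionOptions session_options;

  // Register OpenVINO before creating the session. Providers are tried in
  // registration order during graph partitioning; nodes OpenVINO cannot handle
  // fall back to the default CPU execution provider.
  OrtStatus* status =
      OrtSessionOptionsAppendExecutionProvider_OpenVINO(session_options, "CPU_FP32");
  if (status != nullptr) {
    Ort::GetApi().ReleaseStatus(status);  // report/handle the error as appropriate
    return 1;
  }

  Ort::Session session(env, "model.onnx", session_options);
  // ... create input tensors and call session.Run(...) as usual ...
  return 0;
}
```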
@pranavsharma Thank you very much for your reply. I used that call, but simply adding it makes my model inference core dump: [WARN] 2020-06-08T03:09:14z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft.nchwc' not recognized by nGraph. My model is 3 LSTM layers with input (batch_size, input_len, feat_dim). My build command: ./build.sh --config RelWithDebInfo --build_shared_lib --parallel --use_openvino
cc @jywu-msft
Did you follow these build instructions: https://github.com/microsoft/onnxruntime/blob/master/BUILD.md#openvino ?
I just downloaded the latest master branch using git. Should I use an older version? Which branch is the best choice? https://github.com/openvinotoolkit/openvino/branches When I build onnxruntime, everything runs well and the tests pass successfully.
You can use the rel-1.3.0 branch.
I do not understand. I downloaded openvino via git and then followed the "build-instruction.md" in that directory; the commands are just: Is that not the right way? Should I download files like "l_openvino_toolkit_p_2020.3.194.tgz" and follow this instruction https://docs.openvinotoolkit.org/2020.2/_docs_install_guides_installing_openvino_linux.html ?
I got a similar issue to @Liujingxiu23's.
ONNXRuntime commit id: 541eafb. The inference speed is even slower than the default CPU provider (Eigen).
Can you share repro steps for any core dumps?
@Godricly How do you use openvino?
@jywu-msft The log showed up for each image we inferred. I'm not sure if the image shape changes during inference, but all subsequent inference was slower. Another issue to mention is the renaming of libovep_ngraph.so; now I have to create a soft link to it in the lib directory under the OpenVINO installation. @Liujingxiu23 yes, I remember they also provide a network-based install method, and you need to add some environment setup to your bashrc. You can try their demo first.
@Godricly You mention having to rename libovep_ngraph.so. Why did you have to rename it? Though OpenVINO comes with an nGraph library, it is not fully usable directly from ONNX Runtime due to the differences in protobuf versions and its dependencies. That is why we build a separate and minimal 'ovep_ngraph.so' library along with ONNX Runtime that contains just the functions directly called by ONNX Runtime code, with matching dependencies and toolchain configuration. Note that the full libngraph.so that comes with OpenVINO is still required by OpenVINO libraries internally, so it should not be disturbed. If the ovep_ngraph.so is removed and the pre-built libngraph.so from the OpenVINO installer is used directly, then the ABI differences due to dependency and toolchain mismatches can cause crashes.
The built library cannot find the .so file when we link onnxruntime into our code. That's why. The code was working, but still slow.
The built ovep_ngraph.so library should be found within onnxruntime's build folder at this path: build/Linux/Release/external/ngraph/lib. Replace 'Release' with the appropriate name if you chose a different build config.
@smkarlap I used both the original built libngraph.so and libovep_ngraph.so. The back-trace info of the core dump is as follows; can you help me figure out the problem:
@smkarlap Replacing ngraph following your instruction makes no difference.
Thanks for the stack trace. Can you share the model that is causing this error so that we can reproduce it on our end?
It is not a model-specific problem; I tried a resnet18 model and the core dump also happens.
@smkarlap After some discussion with Intel, the speed issue might be caused by my dynamic input size.
@Liujingxiu23, you don't need to build OpenVINO by yourself. You can download the official pre-built OpenVINO and install it WITHOUT sudo permission in a local directory. Wherever you may install it, you would just need to run the 'setupvars.sh' shell script located within the <installation_directory>/bin directory to set certain environment variables. ONNXRuntime will just pick them up and link against the OpenVINO installed in the local directory.
@Godricly, if a new input shape is used, the binary representation of the model for the specific hardware needs to be re-created for the new shape, hence the latency in the initial run. However, once a binary is created for a specific shape, it is cached internally and re-used for successive runs, so they no longer take the performance hit.
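To make that caching behaviour concrete, here is a rough warm-up sketch (an editor's illustration, not from this thread): run the session once with the shape you will use in production so the shape-specific compilation happens up front. The input/output names and the dimensions are placeholders.

```cpp
// Illustrative warm-up sketch: I/O names ("input"/"output") and the
// {1, 100, 80} shape are placeholders for your model's real ones.
#include <onnxruntime_cxx_api.h>
#include <vector>

void warm_up(Ort::Session& session) {
  std::vector<int64_t> shape{1, 100, 80};        // batch_size, input_len, feat_dim
  std::vector<float> dummy(1 * 100 * 80, 0.0f);  // zero-filled dummy input

  Ort::MemoryInfo mem_info =
      Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value input = Ort::Value::CreateTensor<float>(
      mem_info, dummy.data(), dummy.size(), shape.data(), shape.size());

  const char* input_names[] = {"input"};
  const char* output_names[] = {"output"};
  // The first Run() with this shape triggers the one-time compilation; later
  // runs with the same shape should reuse the cached representation.
  session.Run(Ort::RunOptions{nullptr}, input_names, &input, 1, output_names, 1);
}
```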
@smkarlap Thank you! I'll try!
@smkarlap I rebuilt onnxruntime rel-1.3.0 using l_openvino_toolkit_p_2020.2.120.tgz following what you said, and model prediction now works successfully. Maybe the core dump happened because I built OpenVINO the wrong way or used the wrong release version of onnxruntime. Thank you very much for your help!
I compiled openvino and onnxruntime from source code.
onnxruntime was built with --use_openvino. The build finished successfully and the *.so files were generated.
When I run the command grep -r "CreateExecutionProviderFactory_OpenVINO" * in my lib dir, it shows:
Binary file libonnxruntime.so matches
Binary file libonnxruntime.so.1.3.0 matches
Does compile success mean that OpenVINO is used?
How can I check whether the current provider is OpenVINO? Is there any C/C++ API like "get_the_current_provider"?
And how can I explicitly specify the provider I use? Is there any C/C++ API like "set_provider"?
I see this function "GetExecutionProviderType"; is it useful for me? How do I use it? Is there any example?
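Editor's note for readers landing here later: newer ONNX Runtime releases than the 1.3 build discussed in this thread expose a small C++ helper that lists the execution providers compiled into the binary (treat its availability in your version as an assumption and check your header). It only tells you whether OpenVINO is present in the build; the registration order shown in the first reply controls which provider is actually preferred at run time.

```cpp
// Assumption: Ort::GetAvailableProviders() is available in your ONNX Runtime
// version (it may not exist in the 1.3 headers used above). It lists the
// providers compiled into the binary, e.g. "OpenVINOExecutionProvider",
// not the provider a particular node actually ran on.
#include <onnxruntime_cxx_api.h>
#include <iostream>
#include <string>

int main() {
  for (const std::string& provider : Ort::GetAvailableProviders()) {
    std::cout << provider << "\n";
  }
  return 0;
}
```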