
How to get the current provider used in C++ inference stage? #4140

Closed

Liujingxiu23 opened this issue Jun 5, 2020 · 23 comments

Comments

@Liujingxiu23

Liujingxiu23 commented Jun 5, 2020

I compiled OpenVINO and onnxruntime from source code.
onnxruntime was built with --use_openvino. The build completed successfully and the *.so files were generated.

When I run the command "grep -r "CreateExecutionProviderFactory_OpenVINO" *" in my lib dir, it shows:
Binary file libonnxruntime.so matches
Binary file libonnxruntime.so.1.3.0 matches

Does a successful compile mean that OpenVINO is actually used?
How can I check whether the current provider is OpenVINO? Is there any C/C++ API like "get_the_current_provider"?
And how can I explicitly specify which provider is used? Is there any C/C++ API like "set_provider"?

I see the function "GetExecutionProviderType". Is it useful for me? How do I use it? Is there any example?

@pranavsharma
Contributor

After building ORT with openvino enabled, you must call CreateExecutionProviderFactory_OpenVINO() on the session object to actually use openvino before calling Run(). The order in which the execution providers are set on the session object determines the preference in which they're used within ORT during the graph partitioning stage.
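
Concretely, something like this (a minimal sketch; "model.onnx" is a placeholder path, and depending on your package layout you may also need the OpenVINO provider factory header):

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "test");
  Ort::SessionOptions session_options;

  // Append OpenVINO before creating the session: providers are preferred in
  // the order they were appended during graph partitioning, and nodes the
  // OpenVINO EP cannot handle fall back to the default CPU provider.
  Ort::ThrowOnError(
      OrtSessionOptionsAppendExecutionProvider_OpenVINO(session_options, ""));

  Ort::Session session(env, "model.onnx", session_options);  // placeholder path
  // ... create input tensors and call session.Run(...) as usual ...
  return 0;
}
```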

GetExecutionProviderType is not relevant here unless you want to use custom ops.

@Liujingxiu23
Author

@pranavsharma Thank you very much for your reply. I used:
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_OpenVINO(session_options, ""));
to set OpenVINO as the provider, following https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/OpenVINO-ExecutionProvider.md.

But after simply adding this call, my model inference core dumps.

[WARN] 2020-06-08T03:09:14z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft.nchwc' not recognized by nGraph
[WARN] 2020-06-08T03:09:14z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft' not recognized by nGraph
[WARN] 2020-06-08T03:09:14z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft.mlfeaturizers' not recognized by nGraph
[WARN] 2020-06-08T03:09:14z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.preview.training' not recognized by nGraph
[WARN] 2020-06-08T03:09:14z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.training' not recognized by nGraph
[WARN] 2020-06-08T03:09:14z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.ml' not recognized by nGraph
Segmentation fault (core dumped)

My model is 3 LSTM layers, with input (batch_size, input_len, feat_dim).

My build command: ./build.sh --config RelWithDebInfo --build_shared_lib --parallel --use_openvino
The build was successful, and the tests run during the build passed.

@pranavsharma
Contributor

cc @jywu-msft

@jywu-msft
Member

Did you follow these build instructions: https://github.com/microsoft/onnxruntime/blob/master/BUILD.md#openvino ?
And did you install OpenVINO Release 2020.2?

@Liujingxiu23
Author

I just downloaded the latest master branch using git. Should I use an older version? Which branch is the best choice? https://github.com/openvinotoolkit/openvino/branches

When I built onnxruntime, everything ran well and the tests passed successfully.

@jywu-msft
Member

> I just downloaded the latest master branch using git. Should I use an older version? Which branch is the best choice? https://github.com/openvinotoolkit/openvino/branches
>
> When I built onnxruntime, everything ran well and the tests passed successfully.

You can use the rel-1.3.0 branch.
My question was whether you downloaded and installed OpenVINO 2020.2 per the instructions?
Your initial comment mentioned compiling OpenVINO from source.

@Liujingxiu23
Author

Liujingxiu23 commented Jun 9, 2020

I don't understand.

I downloaded OpenVINO via git, and then followed the "build-instruction.md" in the repo. The commands were just:
git submodule update --init --recursive; ./install_dependencies.sh; mkdir build && cd build; cmake ...

Is that not the right way? Should I instead download a package like "l_openvino_toolkit_p_2020.3.194.tgz" and follow this instruction: https://docs.openvinotoolkit.org/2020.2/_docs_install_guides_installing_openvino_linux.html ?
I see the install command is "sudo ./install.sh", but I do not have sudo privileges.

@Godricly

Godricly commented Jun 9, 2020

I hit a similar issue to @Liujingxiu23's.

[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.training' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.preview.training' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'com.microsoft.nchwc' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'com.microsoft' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.ml' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'com.microsoft.mlfeaturizers' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.training' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.preview.training' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'com.microsoft.nchwc' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'com.microsoft' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.ml' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'com.microsoft.mlfeaturizers' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.training' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.preview.training' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'com.microsoft.nchwc' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'com.microsoft' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.ml' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'com.microsoft.mlfeaturizers' not recognized by nGraph
[WARN] 2020-06-09T09:54:48z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.training' not recognized by nGraph
[WARN] 2020-06-09T09:54:49z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.preview.training' not recognized by nGraph
[WARN] 2020-06-09T09:54:49z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'com.microsoft.nchwc' not recognized by nGraph
[WARN] 2020-06-09T09:54:49z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'com.microsoft' not recognized by nGraph
[WARN] 2020-06-09T09:54:49z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.ml' not recognized by nGraph
[WARN] 2020-06-09T09:54:49z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'com.microsoft.mlfeaturizers' not recognized by nGraph
[WARN] 2020-06-09T09:54:49z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190	Domain 'ai.onnx.training' not recognized by nGraph

ONNXRuntime commit id: 541eafb
OpenVINO Version: /opt/intel/openvino_2020.2.120/

The inference speed is even slower than the default CPU provider (Eigen).

@jywu-msft
Member

Can you share repro steps for any core dumps?
The warning log messages can be ignored.
Regarding the slow inference speed: if you exclude the first inference, is it still slower than the default CPU? (A rough timing sketch is below.)
Adding some other folks:
+@suryasidd, @smkarlap
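
A sketch of that timing comparison (assumes `session`, the input/output name arrays, and the input tensor are already set up as usual; not a full program):

```cpp
#include <chrono>
#include <iostream>

// Times one Run() call. The first call includes the OpenVINO backend
// compilation for the input shape; later calls should hit the cached backend.
void timed_run(Ort::Session& session,
               const char* const* input_names, const Ort::Value* input_tensor,
               const char* const* output_names, const char* label) {
  auto t0 = std::chrono::steady_clock::now();
  auto outputs = session.Run(Ort::RunOptions{nullptr},
                             input_names, input_tensor, 1,
                             output_names, 1);
  auto t1 = std::chrono::steady_clock::now();
  std::cout << label << ": "
            << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
            << " ms\n";
}

// Usage: one timed first run, then a loop of warm runs:
//   timed_run(session, input_names, &input_tensor, output_names, "first run");
//   for (int i = 0; i < 10; ++i)
//     timed_run(session, input_names, &input_tensor, output_names, "warm run");
```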

@Liujingxiu23
Author

@Godricly How did you install OpenVINO?
Did you download "l_openvino_toolkit_p_2020.3.194.tgz" and follow this instruction: https://docs.openvinotoolkit.org/2020.2/_docs_install_guides_installing_openvino_linux.html ?

@Godricly

@jywu-msft The log showed up for each image we inferred. I'm not sure whether the image shape changes during inference, but all subsequent inferences were slower.

Another issue to mention is the renaming of libovep_ngraph.so. Now I have to create a soft link to it under the OpenVINO lib directory.

@Liujingxiu23 Yes, I remember they also provide a network-based install method. You need to add some environment setup to your bashrc as well. You can try their demo first.

@smkarlap
Contributor

@Godricly You mention having to rename libovep_ngraph.so. Why did you have to rename it?

Though OpenVINO comes with an nGraph library, it is not fully usable directly from ONNX Runtime due to differences in protobuf versions and dependencies. That is why we build a separate, minimal 'ovep_ngraph.so' library along with ONNX Runtime that contains just the functions directly called by ONNX Runtime code, built with matching dependencies and toolchain configuration. Note that the full libngraph.so that comes with OpenVINO is still required internally by the OpenVINO libraries, so it should not be disturbed.

If ovep_ngraph.so is removed and the pre-built libngraph.so from the OpenVINO installer is used directly, the ABI differences due to dependency and toolchain mismatches can cause crashes.

@Godricly

Godricly commented Jun 11, 2020

The built library could not find the .so file when we linked onnxruntime into our code; that's why. The code was working, but still slow.

@smkarlap
Contributor

The built ovep_ngraph.so library should be found within onnxruntime's build folder at this path: build/Linux/Release/external/ngraph/lib. Replace 'Release' with whichever build config you chose.

@Liujingxiu23
Author

@smkarlap I used both the originally built libngraph.so and libovep_ngraph.so. The backtrace of the core dump is below; can you help me figure out the problem?
The gdb backtrace:
Starting program: /data/liujingxiu/Torch-Vs-ONNX/Personal-Model/test_onnxruntime-openvino/example-app
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib64/libthread_db.so.1".
2020-06-11 09:50:53.228775700 [W:onnxruntime:test, abi_session_options.cc:147 SetIntraOpNumThreads] Since openmp is enabled in this build, this API cannot be used to configure intra op num threads. Please use the openmp environment variables to control the number of threads.
2020-06-11 09:50:53.254344213 [W:onnxruntime:, graph.cc:2619 CleanUnusedInitializers] Removing initializer 'similarity_bias'. It is not used by any node and should be removed from the model.
2020-06-11 09:50:53.254374796 [W:onnxruntime:, graph.cc:2619 CleanUnusedInitializers] Removing initializer 'similarity_weight'. It is not used by any node and should be removed from the model.
num_input_nodes: 1
Input 0 : name=inputs
Input 0 : type=1
Input 0 : num_dims=3
Input 0 : dim 0=-1
Input 0 : dim 1=-1
Input 0 : dim 2=40
======================= Before Run ======================
[WARN] 2020-06-11T01:50:53z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft.nchwc' not recognized by nGraph
[WARN] 2020-06-11T01:50:53z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft' not recognized by nGraph
[WARN] 2020-06-11T01:50:53z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft.mlfeaturizers' not recognized by nGraph
[WARN] 2020-06-11T01:50:53z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.preview.training' not recognized by nGraph
[WARN] 2020-06-11T01:50:53z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.training' not recognized by nGraph
[WARN] 2020-06-11T01:50:53z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.ml' not recognized by nGraph
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff757437c in ngraph::op::v0::Parameter::Parameter(ngraph::element::Type const&, ngraph::PartialShape const&, bool) ()
from /data/liujingxiu/Torch-Vs-ONNX/Personal-Model/test_onnxruntime-openvino/libonnxruntime/onnxruntime/lib/libngraph.so
Missing separate debuginfos, use: debuginfo-install glibc-2.17-222.el7.x86_64
(gdb) bt
#0 0x00007ffff757437c in ngraph::op::v0::Parameter::Parameter(ngraph::element::Type const&, ngraph::PartialShape const&, bool) ()
from /data/liujingxiu/Torch-Vs-ONNX/Personal-Model/test_onnxruntime-openvino/libonnxruntime/onnxruntime/lib/libngraph.so
#1 0x00007ffff5201b4d in construct<ngraph::op::v0::Parameter, ngraph::element::Type const&, ngraph::PartialShape const&> (__p=, this=) at /usr/include/c++/5/ext/new_allocator.h:120
#2 construct<ngraph::op::v0::Parameter, ngraph::element::Type const&, ngraph::PartialShape const&> (
__p=, __a=...) at /usr/include/c++/5/bits/alloc_traits.h:530
#3 _Sp_counted_ptr_inplace<ngraph::element::Type const&, ngraph::PartialShape const&> (__a=...,
this=0x85c5c0) at /usr/include/c++/5/bits/shared_ptr_base.h:522
#4 __shared_count<ngraph::op::v0::Parameter, std::allocatorngraph::op::v0::Parameter, ngraph::element::Type const&, ngraph::PartialShape const&> (__a=..., this=0x7fffffff8ac8)
at /usr/include/c++/5/bits/shared_ptr_base.h:617
#5 __shared_ptr<std::allocatorngraph::op::v0::Parameter, ngraph::element::Type const&, ngraph::PartialShape const&> (__a=..., __tag=..., this=0x7fffffff8ac0) at /usr/include/c++/5/bits/shared_ptr_base.h:1097
#6 shared_ptr<std::allocatorngraph::op::v0::Parameter, ngraph::element::Type const&, ngraph::PartialShape const&> (__a=..., __tag=..., this=0x7fffffff8ac0) at /usr/include/c++/5/bits/shared_ptr.h:319
#7 allocate_shared<ngraph::op::v0::Parameter, std::allocatorngraph::op::v0::Parameter, ngraph::element::Type const&, ngraph::PartialShape const&> (__a=...) at /usr/include/c++/5/bits/shared_ptr.h:620
#8 make_shared<ngraph::op::v0::Parameter, ngraph::element::Type const&, ngraph::PartialShape const&> ()
at /usr/include/c++/5/bits/shared_ptr.h:636
#9 get_ng_parameter (this=0x7fffffff8b20)
at /home/personal/work/liujingxiu/onnxruntime/build/Linux/RelWithDebInfo/ngraph/src/project_ngraph/src/ngraph/frontend/onnx_import/core/value_info.hpp:102
#10 get_ng_node (initializers=..., parameters=..., this=0x7fffffff8b20)
at /home/personal/work/liujingxiu/onnxruntime/build/Linux/RelWithDebInfo/ngraph/src/project_ngraph/src/ngraph/frontend/onnx_import/core/value_info.hpp:95
#11 ngraph::onnx_import::Graph::Graph (this=0x7fffffff8e70, graph_proto=..., model=...)
at /home/personal/work/liujingxiu/onnxruntime/build/Linux/RelWithDebInfo/ngraph/src/project_ngraph/src/ngraph/frontend/onnx_import/core/graph.cpp:127
#12 0x00007ffff51fe8b0 in ngraph::onnx_import::import_onnx_model (stream=...)
at /home/personal/work/liujingxiu/onnxruntime/build/Linux/RelWithDebInfo/ngraph/src/project_ngraph/src/ngraph/frontend/onnx_import/onnx.cpp:73
#13 0x00007ffff666e132 in onnxruntime::openvino_ep::backend_utils::CreateCNNNetwork (model_proto=...,
device_id=..., precision=...)
at /home/personal/work/liujingxiu/onnxruntime/onnxruntime/core/providers/openvino/backend_utils.cc:51
#14 0x00007ffff6661a91 in onnxruntime::openvino_ep::BasicBackend::BasicBackend (this=0x850b70,
model_proto=..., global_context=..., subgraph_context=...)
at /home/personal/work/liujingxiu/onnxruntime/onnxruntime/core/providers/openvino/backends/basic_backend.cc:29
#15 0x00007ffff665a9c7 in construct<onnxruntime::openvino_ep::BasicBackend, onnx::ModelProto const&, onnxruntime::openvino_ep::GlobalContext&, onnxruntime::openvino_ep::SubGraphContext const&> (__p=,
---Type to continue, or q to quit---
this=) at /usr/include/c++/5/ext/new_allocator.h:120
#16 construct<onnxruntime::openvino_ep::BasicBackend, onnx::ModelProto const&, onnxruntime::openvino_ep::GlobalContext&, onnxruntime::openvino_ep::SubGraphContext const&> (__p=, __a=...)
at /usr/include/c++/5/bits/alloc_traits.h:530
#17 _Sp_counted_ptr_inplace<onnx::ModelProto const&, onnxruntime::openvino_ep::GlobalContext&, onnxruntime::openvino_ep::SubGraphContext const&> (__a=..., this=0x850b60)
at /usr/include/c++/5/bits/shared_ptr_base.h:522
#18 __shared_count<onnxruntime::openvino_ep::BasicBackend, std::allocatoronnxruntime::openvino_ep::BasicBackend, onnx::ModelProto const&, onnxruntime::openvino_ep::GlobalContext&, onnxruntime::openvino_ep::SubGraphContext const&> (__a=..., this=) at /usr/include/c++/5/bits/shared_ptr_base.h:617
#19 __shared_ptr<std::allocatoronnxruntime::openvino_ep::BasicBackend, onnx::ModelProto const&, onnxruntime::openvino_ep::GlobalContext&, onnxruntime::openvino_ep::SubGraphContext const&> (__a=..., __tag=...,
this=) at /usr/include/c++/5/bits/shared_ptr_base.h:1097
#20 shared_ptr<std::allocatoronnxruntime::openvino_ep::BasicBackend, onnx::ModelProto const&, onnxruntime::openvino_ep::GlobalContext&, onnxruntime::openvino_ep::SubGraphContext const&> (__a=..., __tag=...,
this=) at /usr/include/c++/5/bits/shared_ptr.h:319
#21 allocate_shared<onnxruntime::openvino_ep::BasicBackend, std::allocatoronnxruntime::openvino_ep::BasicBackend, onnx::ModelProto const&, onnxruntime::openvino_ep::GlobalContext&, onnxruntime::openvino_ep::SubGraphContext const&> (__a=...) at /usr/include/c++/5/bits/shared_ptr.h:620
#22 make_shared<onnxruntime::openvino_ep::BasicBackend, onnx::ModelProto const&, onnxruntime::openvino_ep::GlobalContext&, onnxruntime::openvino_ep::SubGraphContext const&> ()
at /usr/include/c++/5/bits/shared_ptr.h:636
#23 onnxruntime::openvino_ep::BackendFactory::MakeBackend (model_proto=..., global_context=...,
subgraph_context=...)
at /home/personal/work/liujingxiu/onnxruntime/onnxruntime/core/providers/openvino/backends/backend_factory.cc:21
#24 0x00007ffff665966f in onnxruntime::openvino_ep::BackendManager::Compute (this=0x87c820, api=...,
context=0x7fffffffbb50)
at /home/personal/work/liujingxiu/onnxruntime/onnxruntime/core/providers/openvino/backend_manager.cc:264
#25 0x00007ffff6649499 in operator() (__closure=, context=,
api=, state=)
at /home/personal/work/liujingxiu/onnxruntime/onnxruntime/core/providers/openvino/openvino_execution_provider.cc:967
#26 std::_Function_handler<onnxruntime::common::Status(void*, const OrtApi*, OrtKernelContext*), onnxruntime::OpenVINOExecutionProvider::Compile(const std::vectoronnxruntime::Node*&, std::vectoronnxruntime::NodeComputeInfo&)::<lambda(onnxruntime::FunctionState, const OrtApi*, OrtKernelContext*)> >::_M_invoke(const std::_Any_data &, <unknown type in /data/liujingxiu/Torch-Vs-ONNX/Personal-Model/test_onnxruntime-openvino/libonnxruntime/onnxruntime/lib/libonnxruntime.so.1.3.0, CU 0x91dc04, DIE 0xa01281>, <unknown type in /data/liujingxiu/Torch-Vs-ONNX/Personal-Model/test_onnxruntime-openvino/libonnxruntime/onnxruntime/lib/libonnxruntime.so.1.3.0, CU 0x91dc04, DIE 0xa01286>, <unknown type in /data/liujingxiu/Torch-Vs-ONNX/Personal-Model/test_onnxruntime-openvino/libonnxruntime/onnxruntime/lib/libonnxruntime.so.1.3.0, CU 0x91dc04, DIE 0xa0128b>) (__functor=..., __args#0=,
__args#1=, __args#2=) at /usr/include/c++/5/functional:1857
---Type to continue, or q to quit---
#27 0x00007ffff6a1a1f0 in operator() (__args#2=0x7fffffffbb50, __args#1=0x7ffff70f9f00 <ort_api_1_to_3>,
__args#0=0x84a510, this=0x87f530) at /usr/include/c++/5/functional:2267
#28 onnxruntime::FunctionKernel::Compute (this=0x87f4e0, context=0x7fffffffbb50)
at /home/personal/work/liujingxiu/onnxruntime/onnxruntime/core/framework/func_kernel.h:41
#29 0x00007ffff6a651a7 in onnxruntime::SequentialExecutor::Execute(onnxruntime::SessionState const&, std::vector<int, std::allocator > const&, std::vector<OrtValue, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<OrtValue, std::allocator >&, std::unordered_map<unsigned long, std::function<onnxruntime::common::Status (onnxruntime::TensorShape const&, OrtMemoryInfo const&, OrtValue&, bool&)>, std::hash, std::equal_to, std::allocator<std::pair<unsigned long const, std::function<onnxruntime::common::Status (onnxruntime::TensorShape const&, OrtMemoryInfo const&, OrtValue&, bool&)> > > > const&, onnxruntime::logging::Logger const&) (this=0x84c2e0, session_state=...,
feed_mlvalue_idxs=..., feeds=..., fetch_mlvalue_idxs=..., fetches=..., fetch_allocators=..., logger=...)
at /home/personal/work/liujingxiu/onnxruntime/onnxruntime/core/framework/sequential_executor.cc:271
#30 0x00007ffff6a521f7 in onnxruntime::utils::ExecuteGraphImpl(const onnxruntime::SessionState &, const onnxruntime::FeedsFetchesManager &, const std::vector<OrtValue, std::allocator > &, std::vector<OrtValue, std::allocator > &, const std::unordered_map<long unsigned int, std::function<onnxruntime::common::Status(const onnxruntime::TensorShape&, const OrtMemoryInfo&, OrtValue&, bool&)>, std::hash, std::equal_to, std::allocator<std::pair<long unsigned int const, std::function<onnxruntime::common::Status(const onnxruntime::TensorShape&, const OrtMemoryInfo&, OrtValue&, bool&)> > > > &, ExecutionMode, const bool &, const onnxruntime::logging::Logger &, bool) (session_state=...,
feeds_fetches_manager=..., feeds=..., fetches=..., fetch_allocators=..., execution_mode=ORT_SEQUENTIAL,
terminate_flag=@0x7fffffffdd48: false, logger=..., only_execute_path_to_fetches=false)
at /home/personal/work/liujingxiu/onnxruntime/onnxruntime/core/framework/utils.cc:479
#31 0x00007ffff6a53a8a in onnxruntime::utils::ExecuteGraph (session_state=..., feeds_fetches_manager=...,
feeds=..., fetches=..., execution_mode=ORT_SEQUENTIAL, terminate_flag=@0x7fffffffdd48: false,
logger=..., only_execute_path_to_fetches=false)
at /home/personal/work/liujingxiu/onnxruntime/onnxruntime/core/framework/utils.cc:539
#32 0x00007ffff6633601 in onnxruntime::InferenceSession::Run (this=this@entry=0x82f7f0, run_options=...,
feed_names=..., feeds=..., output_names=..., p_fetches=0x7fffffffdcc0)
at /home/personal/work/liujingxiu/onnxruntime/onnxruntime/core/session/inference_session.cc:1163
#33 0x00007ffff66062f1 in OrtApis::Run (sess=0x82f7f0, run_options=0x0, input_names=,
input=, input_len=, output_names1=, output_names_len=1,
output=0x7fffffffde10)
at /home/personal/work/liujingxiu/onnxruntime/onnxruntime/core/session/onnxruntime_c_api.cc:503
#34 0x0000000000404e0b in main ()

@Godricly

@smkarlap Replacing ngraph following your instructions makes no difference.

@smkarlap
Contributor

Thanks for the stack trace. Can you share the model that is causing this error so that we can reproduce it on our end?

@Liujingxiu23
Author

It is not a model-specific problem; I tried a resnet18 model and the core dump happens as well.
I did not build OpenVINO following the official instructions, because I do not have sudo privileges.
I guess this is the cause of the core dump; my build of OpenVINO may have some problem.
Thank you @smkarlap

@Godricly

@smkarlap After some discussion with Intel, the speed issue might be caused by my dynamic input size.

@smkarlap
Contributor

@Liujingxiu23, you don't need to build OpenVINO yourself. You can download the official pre-built OpenVINO and install it WITHOUT sudo permission in a local directory. Wherever you install it, you just need to run the 'setupvars.sh' shell script located in the <installation_directory>/bin directory to set certain environment variables. ONNX Runtime will pick them up and link against the OpenVINO installed in that local directory.

@smkarlap
Contributor

@Godricly, if a new input shape is used, the binary representation of the model for the specific hardware needs to be re-created for that shape, hence the latency in the initial run. However, once a binary is created for a specific shape, it is cached and re-used for successive runs with that shape, so those no longer take the performance hit. (See the warm-up sketch below.)
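
One way to avoid paying that cost on live traffic is to warm the cache once per expected shape at startup. A sketch (the shape list is hypothetical; assumes a single float input and the usual name arrays):

```cpp
#include <cstdint>
#include <vector>

// Illustrative warm-up: run one dummy inference per expected input shape so
// the shape-specific OpenVINO backend is compiled and cached before real
// traffic arrives. Assumes a single float input; the shapes are examples.
void warm_up(Ort::Session& session,
             const char* const* input_names, const char* const* output_names) {
  const std::vector<std::vector<int64_t>> expected_shapes = {
      {1, 100, 40}, {1, 200, 40}, {1, 400, 40}};  // e.g. (batch, len, feat)

  Ort::MemoryInfo mem =
      Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);

  for (const auto& shape : expected_shapes) {
    size_t n = 1;
    for (int64_t d : shape) n *= static_cast<size_t>(d);
    std::vector<float> buffer(n, 0.0f);  // zero-filled dummy input
    Ort::Value input = Ort::Value::CreateTensor<float>(
        mem, buffer.data(), buffer.size(), shape.data(), shape.size());
    session.Run(Ort::RunOptions{nullptr},
                input_names, &input, 1, output_names, 1);
  }
}
```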

@Liujingxiu23
Author

@smkarlap Thank you! I'll try!

@Liujingxiu23
Author

@smkarlap I rebuilt onnxruntime rel-1.3.0 using l_openvino_toolkit_p_2020.2.120.tgz following what you said, and model prediction now completes successfully. The core dump probably happened because I built OpenVINO the wrong way or used the wrong release version of onnxruntime. Thank you very much for your help!
