My environment is Windows, and I want to use Python to run inference with ONNX Runtime using the OpenVINO execution provider. After installing OpenVINO, I built ONNX Runtime with OpenVINO support. My build command is

.\build.bat --update --build --build_shared_lib --build_wheel --config RelWithDebInfo --cmake_generator "Visual Studio 16 2019" --use_openvino CPU_FP32 --parallel --skip_tests

The build completes without errors, but when I import onnxruntime and use it for inference, I get the following error:

[E:onnxruntime:Default, provider_bridge_ort.cc:634 onnxruntime::ProviderLibrary::Get] Failed to load library, error code: 126

and the inference speed is very slow. Can anyone tell me why?
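
To narrow it down, this is the kind of check I run (a minimal sketch; "model.onnx" is a placeholder for my actual model path):

    import onnxruntime as ort

    # List every execution provider compiled into this build.
    print(ort.get_available_providers())

    # Request the OpenVINO EP explicitly and keep CPU as a fallback.
    sess = ort.InferenceSession(
        "model.onnx",  # placeholder model path
        providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
    )

    # Show which providers the session actually ended up using; if only
    # CPUExecutionProvider appears here, the OpenVINO provider library
    # failed to load.
    print(sess.get_providers())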

wwbnjs
  • Looks like the provider you want isn't loading at runtime. – dmedine Mar 11 '21 at 06:26
  • How do I load the OpenVINO provider on Windows? – wwbnjs Mar 12 '21 at 02:53
  • presumably if it is installed correctly, your python environment will take care of that for you, but I really couldn't say – dmedine Mar 12 '21 at 04:36
  • Inference can go slow for a number of reasons. Obviously faster providers will make it go faster, but on a CPU alone DNNs can take a long time even with small images (presuming you are running a CNN for some kind of computer vision). – dmedine Mar 12 '21 at 05:40
  • We can also build onnxruntime with DirectML ('--use_dml') to accelerate inference. I built it successfully, but when I use it for inference I get the message "Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider." Is it because of CPU only? (See the sketch after these comments.) – wwbnjs Mar 12 '21 at 08:57
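
The DirectML message quoted above is a warning rather than a failure: the DML execution provider does not work with ONNX Runtime's memory pattern optimization, so the session turns it off. A minimal sketch of disabling it explicitly when creating the session, which avoids that message ("model.onnx" is a placeholder path):

    import onnxruntime as ort

    sess_options = ort.SessionOptions()
    # DML does not support memory patterns, so disable them up front.
    sess_options.enable_mem_pattern = False

    sess = ort.InferenceSession(
        "model.onnx",  # placeholder model path
        sess_options,
        providers=["DmlExecutionProvider"],
    )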

1 Answer


Did you cross-check your config, the layers, and the model topology? Not all of them are supported by the OpenVINO execution provider; the OpenVINO EP documentation lists what is supported.
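
As a quick cross-check, you can list the operator types your model uses and compare them against the operators the OpenVINO execution provider supports (a minimal sketch using the onnx package; "model.onnx" is a placeholder path):

    import onnx
    from collections import Counter

    # Count how often each operator type appears in the model graph.
    model = onnx.load("model.onnx")  # placeholder model path
    op_counts = Counter(node.op_type for node in model.graph.node)

    for op_type, count in op_counts.most_common():
        print(f"{op_type}: {count}")

Any operator the OpenVINO EP does not support falls back to the CPU execution provider, which can also explain slow inference.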

Rommel_Intel