
When I run inference with the device `MULTI:CPU,MYRIAD` (either from Python or with benchmark_app), I get the same inference time as with `CPU` alone. I have the same problem with two Myriad devices: `MULTI:MYRIAD1,MYRIAD2` gives the same results as a single `MYRIAD`.

Do you know how to resolve this problem? Thanks for your help :)

    Welcome to [Stack Overflow](https://stackoverflow.com/ "Stack Overflow"). For us to help you, provide a minimal reproducible problem set containing sample input, expected output, actual output, and all relevant code necessary to reproduce the problem. What you have provided falls short of this goal. See [Minimal Reproducible Example](https://stackoverflow.com/help/minimal-reproducible-example "Minimal Reproducible Example") for details. – itprorh66 May 02 '22 at 14:26

1 Answer


Enumerate your available devices using the OpenVINO Hello Query Device Sample to ensure they actually exist and are usable.

Please note that certain device combinations carry performance caveats, because the devices share power, bandwidth, and other resources. For example, when combining CPU and GPU, it is recommended to enable GPU throttling, which frees an extra CPU thread for the CPU inference.

The reason you get the same inference time may be that the program falls back to the one device that is actually available, which ends up being the same in both runs. For example, with CPU + MYRIAD, the runtime may be unable to detect the MYRIAD (or it is genuinely unavailable), so inference executes on the CPU alone.

Checking the enumerated device list first should help narrow this down.

Iffa_Intel