
I would like to implement a custom image classifier using MaskRCNN.

To increase the speed of the network, I would like to optimise the inference.

I already used the OpenCV DNN library, but I would like to take a step forward with OpenVINO.

I successfully used the OpenVINO Model Optimizer (Python) to build the .xml and .bin files representing my network.

I successfully built the OpenVINO samples directory with Visual Studio 2017 and ran the MaskRCNNDemo project.

mask_rcnn_demo.exe -m .\Release\frozen_inference_graph.xml -i .\Release\input.jpg

InferenceEngine:
        API version ............ 1.4
        Build .................. 19154
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     .\Release\input.jpg
[ INFO ] Loading plugin

        API version ............ 1.5
        Build .................. win_20181005
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (4288, 2848) to (800, 800)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Start inference (1 iterations)

Average running time of one iteration: 2593.81 ms

[ INFO ] Processing output blobs
[ INFO ] Detected class 16 with probability 0.986519: [2043.3, 1104.9], [2412.87, 1436.52]
[ INFO ] Image out.png created!
[ INFO ] Execution successful

(Screenshot: "Oiseau VINO CPP" detection result)

Then I tried to reproduce this project as a separate project... First I had to check the dependencies:

<MaskRCNNDemo>
     //References
     <format_reader/>    => opens OpenCV images, resizes them and gets uchar data
     <ie_cpu_extension/> => CPU extension for unsupported layers (?)

     //Linker
     format_reader.lib         => Format Reader lib (VINO samples, compiled)
     cpu_extension.lib         => CPU extension lib (VINO samples, compiled)
     inference_engined.lib     => Inference Engine lib (VINO)
     opencv_world401d.lib      => OpenCV lib
     libiomp5md.lib            => Dependency
     ... (other libs)

With that I built a new project, with my own classes and my own way of opening images (multi-frame TIFF). This works without problems, so I will not describe it (I already use it with an OpenCV DNN inference engine without any problem).

I wanted to build the same kind of project as MaskRCNNDemo: CustomIA

<CustomIA>
     //References
     None => I use my own libtiff-based way of opening images, and I resize with OpenCV
     None => I just add includes to the cpu_extension source code.

     //Linker
     opencv_world345d.lib   => OpenCV 3.4.5 library
     tiffd.lib              => Libtiff Library
     cpu_extension.lib      => CPU extension compiled with sample
     inference_engined.lib  => Inference engine lib.

I added the following DLLs to the project target dir:

cpu_extension.dll
inference_engined.dll
libiomp5md.dll
mkl_tiny_omp.dll
MKLDNNPlugind.dll
opencv_world345d.dll
tiffd.dll
tiffxxd.dll

It compiled and executed successfully, but I faced two issues:

OLD CODE:

    slog::info << "Loading plugin" << slog::endl;
    InferencePlugin plugin = PluginDispatcher({ FLAGS_pp, "../../../lib/intel64", "" }).getPluginByDevice(FLAGS_d);

    /** Loading default extensions **/
    if (FLAGS_d.find("CPU") != std::string::npos) {
        /**
         * cpu_extensions library is compiled from "extension" folder containing
         * custom MKLDNNPlugin layer implementations. These layers are not supported
         * by mkldnn, but they can be useful for inferring custom topologies.
        **/
        plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());
    }
    /** Printing plugin version **/
    printPluginVersion(plugin, std::cout);

OUTPUT :

[ INFO ] Loading plugin
    API version ............ 1.5
    Build .................. win_20181005
    Description ....... MKLDNNPlugin

NEW CODE:

VINOEngine::VINOEngine()
{
    // Loading Plugin
    std::cout << std::endl;
    std::cout << "[INFO] - Loading VINO Plugin..." << std::endl;
    this->plugin = PluginDispatcher({ "", "../../../lib/intel64", "" }).getPluginByDevice("CPU");
    this->plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());
    printPluginVersion(this->plugin, std::cout);
}

OUTPUT :

[INFO] - Loading VINO Plugin...
000001A242280A18  // Looks like a memory address???

Second issue:

When I try to extract my ROIs and masks with the new code, whenever I have a "match" I always get:

  • score =1.0
  • x1=x2=0.0
  • y1=y2=1.0

But the mask looks correctly extracted...

NEW CODE:

        float score = box_info[2];
        if (score > this->Conf_Threshold)
        {
            // Rebuild the box coordinates...
            float x1 = std::min(std::max(0.0f, box_info[3] * Image.cols), static_cast<float>(Image.cols));
            float y1 = std::min(std::max(0.0f, box_info[4] * Image.rows), static_cast<float>(Image.rows));
            float x2 = std::min(std::max(0.0f, box_info[5] * Image.cols), static_cast<float>(Image.cols));
            float y2 = std::min(std::max(0.0f, box_info[6] * Image.rows), static_cast<float>(Image.rows));
            int box_width = std::min(static_cast<int>(std::max(0.0f, x2 - x1)), Image.cols);
            int box_height = std::min(static_cast<int>(std::max(0.0f, y2 - y1)), Image.rows);

(Screenshot: extracted VINO mask)

Image is resized from (4288, 2848) to (800, 800)
Detected class 62 with probability 1: [4288, 0], [4288, 0]

So it is impossible for me to place the mask in the image and resize it, since I don't have correct bbox coordinates...
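The degenerate 0/1 pattern (score = 1.0, x1 = x2 = 0.0, y1 = y2 = 1.0) is exactly what you would see if the read starts at the wrong offset in the detection record, so the clamp maps image_id/label fields to the image borders. A minimal, self-contained sketch of the clamping logic, assuming the usual DetectionOutput record layout [image_id, label, score, x1, y1, x2, y2] with coordinates normalised to [0, 1] (toPixelBox and the sample values are hypothetical, for illustration only):

```cpp
#include <algorithm>

// Pixel-space box after clamping to the image bounds.
struct PixelBox { float x1, y1, x2, y2; };

// Parse one 7-float DetectionOutput record
// [image_id, label, score, x1, y1, x2, y2] (normalised coords)
// and clamp the box to a cols x rows image, mirroring the code above.
PixelBox toPixelBox(const float* box_info, int cols, int rows) {
    auto clampf = [](float v, float lo, float hi) {
        return std::min(std::max(v, lo), hi);
    };
    PixelBox b;
    b.x1 = clampf(box_info[3] * cols, 0.0f, static_cast<float>(cols));
    b.y1 = clampf(box_info[4] * rows, 0.0f, static_cast<float>(rows));
    b.x2 = clampf(box_info[5] * cols, 0.0f, static_cast<float>(cols));
    b.y2 = clampf(box_info[6] * rows, 0.0f, static_cast<float>(rows));
    return b;
}
```

If the base pointer were off by even one field, box_info[3] would land on integer-valued fields (label, score) and the clamp would snap everything to 0 or to the image border, producing boxes like "[4288, 0], [4288, 0]" above, so it is worth printing all seven raw floats of one record before any scaling.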

Does anybody have an idea about what I am doing wrong?

How do I correctly create and link an OpenVINO project using cpu_extension?

Thanks !

FrsECM
    afaik, it is possible to compile OpenCV with OpenVINO. Would this be an option? https://www.learnopencv.com/using-openvino-with-opencv/ – Micka Feb 07 '19 at 09:39
  • I already tried to change theses to option in my Opencv custom engine (that i didn't present here) : net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE) net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU) But it seems not to work unfortunately. – FrsECM Feb 07 '19 at 14:40
  • 1
    did you compile OpenCV with OpenVINO? – Micka Feb 07 '19 at 15:10
  • Is there a flag in cmake in order to do this ? I didn't do any things like that. I'll try if it is possible – FrsECM Feb 07 '19 at 18:03
  • never tried it mysrlf, I hope there is a manual/tutorial in the learnopencv link provided in my first comment – Micka Feb 07 '19 at 20:00
  • I have issues with AVX2/512 support when i try to configure. I'll try to download and compile again opencv (i did it few monthes ago !. Thanks for the idea ! I'll keep you aware ! – FrsECM Feb 08 '19 at 10:06
  • 1
    @Micka I succeed to use OpenVINO through OpenCV with OpenCV 4.0.1 that is included in OpenVINO toolkit. It uses different dll than dll i used before.. Maybe the reason i faced problem... I don't know, and i won't dig more because processing time is now correct for what i want in the end (arround 2,5sec / frame on basic CPU). Thanks ! – FrsECM Feb 09 '19 at 12:26

1 Answer


First issue, with the version: look above the printPluginVersion function and you will see overloaded std::ostream operators for the InferenceEngine and plugin version info; without those overloads in scope, the stream prints a raw pointer instead.

Second: you can try to debug your model by comparing the output right after the very first convolution, and the output layer, between the original framework and OV. Make sure they are equal element by element.

In OV you can use network.addOutput("layer_name") to add any layer to output. Then read output by using: const Blob::Ptr debug_blob = infer_request.GetBlob("layer_name").
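Once both dumps are in hand (flattened to float buffers), the element-by-element check can be a tiny helper; maxAbsDiff below is a hypothetical name, not part of the OpenVINO API:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Compare a layer's flattened output from the original framework
// against the one read back from OV, and report the largest
// element-wise absolute difference.
float maxAbsDiff(const std::vector<float>& reference,
                 const std::vector<float>& candidate) {
    assert(reference.size() == candidate.size());
    float worst = 0.0f;
    for (std::size_t i = 0; i < reference.size(); ++i)
        worst = std::max(worst, std::fabs(reference[i] - candidate[i]));
    return worst;
}
```

A large difference already after the first convolution usually points at the input side (wrong channel order, missing mean/scale), not at the converted weights.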

Most of the time with issues like this, I find missing input pre-processing (mean subtraction, normalization, etc.).

cpu_extensions is a dynamic library, but you can still change the cmake script to make it static and link it with your application. After that you would need to pass your application path in the call: IExtensionPtr extension_ptr = make_so_pointer<IExtension>(argv[0]);

Dmitry