
I have an ONNX model of Detectron2 whose outputs are not of a fixed size; they are dynamic. I was able to run inference in Python with onnxruntime:

    import onnxruntime

    # Initialize session and get prediction
    model_path = "./detectron2/tests/mask_rcnn_R_50_C4_3x.onnx"
    session = onnxruntime.InferenceSession(model_path, None)

    input_name = session.get_inputs()[0].name
    output_boxes = session.get_outputs()[0].name      # box coordinates
    output_val3 = session.get_outputs()[1].name       # predicted classes
    output_val5 = session.get_outputs()[2].name       # masks
    output_val = session.get_outputs()[3].name        # scores
    output_onnxSplit = session.get_outputs()[4].name

    # im is the preprocessed input image (preprocessing not shown)
    result = session.run(
        [output_boxes, output_val3, output_val5, output_val, output_onnxSplit],
        {input_name: im},
    )

How do I implement something similar in C++/WinRT using Windows.AI.MachineLearning? I am running into memory exceptions and "incorrect parameter" errors. Locally, I have a working solution for ONNX models with fixed-size outputs that uses Windows.AI.MachineLearning::Bind and then calls Windows.AI.MachineLearning::Evaluate to run the inference. How can I bind dynamic outputs using Windows.AI.MachineLearning?
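For context, here is a minimal C++/WinRT sketch of the Bind/Evaluate flow being discussed. The model path, input shape, function name, and preprocessing are placeholders rather than the actual project code, and leaving the outputs unbound is one possible way to handle the dynamic sizes, not a confirmed fix:

    #include <winrt/Windows.AI.MachineLearning.h>
    #include <winrt/Windows.Foundation.Collections.h>
    #include <vector>

    using namespace winrt;
    using namespace winrt::Windows::AI::MachineLearning;

    // Hypothetical helper: imageData is assumed to be a preprocessed CHW float image.
    void RunMaskRcnn(std::vector<float> const& imageData, int64_t height, int64_t width)
    {
        // Load the model and create a session and binding (placeholder path).
        LearningModel model = LearningModel::LoadFromFilePath(L"mask_rcnn_R_50_C4_3x.onnx");
        LearningModelSession session(model, LearningModelDevice(LearningModelDeviceKind::Default));
        LearningModelBinding binding(session);

        // Bind the input tensor; the shape here is an assumption about the model.
        std::vector<int64_t> inputShape{ 3, height, width };
        TensorFloat inputTensor = TensorFloat::CreateFromArray(inputShape, imageData);
        binding.Bind(model.InputFeatures().GetAt(0).Name(), inputTensor);

        // Outputs are deliberately not pre-bound: unbound outputs are allocated by
        // the runtime during Evaluate and returned in the results map, so their
        // (dynamic) sizes do not need to be known up front.
        LearningModelEvaluationResult results = session.Evaluate(binding, L"run");

        // Look up an output by name and inspect its runtime-determined shape and data.
        TensorFloat boxes = results.Outputs()
                                   .Lookup(model.OutputFeatures().GetAt(0).Name())
                                   .as<TensorFloat>();
        auto boxShape = boxes.Shape();           // e.g. {num_detections, 4}
        auto boxValues = boxes.GetAsVectorView();
    }

The sketch relies on leaving every output unbound so the runtime sizes and allocates them at evaluation time; only the input tensor is bound with an explicit shape.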

[Image: onnx model input and output types]

Anna Maule

  • Please show the code that doesn't exhibit the expected behavior. If you run into errors, make sure to reproduce the error diagnostic, verbatim here, too. – IInspectable Apr 11 '23 at 18:17

0 Answers