
When I use the Python onnxruntime API to run a model, I get the result and extract what I need from it, like this:

outputs = session.run(None, inputs)  # session.run returns a list of outputs
y = outputs[0]                       # the shape of y is [1, m, n, 2]
scores1 = y[0, :, :, 0]
scores2 = y[0, :, :, 1]

Note that the output shape is dynamic, not fixed.

When I use C++ onnxruntime to run a model like this:

auto output_tensors = session.Run(
    Ort::RunOptions{nullptr},
    input_node_names.data(), &input_tensor, num_input_nodes,
    output_node_names.data(), num_output_nodes);

How should I extract scores1 and scores2 from output_tensors the way I do in Python? Do I have to extract them by iterating over output_tensors?
