
Try using something like this:

std::vector<Ort::Value> ort_inputs;
for (size_t i = 0; i < inputNames.size(); ++i) {
  ort_inputs.emplace_back(Ort::Value::CreateTensor<float>(
      memoryInfo, static_cast<float *>(inputs[i].data), inputs[i].get_size(),
      inputShapes[i].data(), inputShapes[i].size()));
}

std::vector<Ort::Value> outputTensors =
    session.Run(Ort::RunOptions{nullptr}, inputNames.data(),
                ort_inputs.data(), 1, outputNames.data(), outputNames.size());

Now, my model is like this:

                           yolox_tiny_cpunms.onnx Detail
╭──────────────┬────────────────────────────────┬────────────────────────┬───────────────╮
│ Name         │ Shape                          │ Input/Output           │ Dtype         │
├──────────────┼────────────────────────────────┼────────────────────────┼───────────────┤
│ input        │ [1, 3, 416, 416]               │ input                  │ float32       │
│ boxes        │ [1, -1, -1]                    │ output                 │ float32       │
│ scores       │ [1, -1]                        │ output                 │ float32       │
│ labels       │ [1, -1]                        │ output                 │ int64         │
╰──────────────┴────────────────────────────────┴────────────────────────┴───────────────╯

As you can see, the outputs are dynamic, but the C++ code gives me output tensors with shapes [1, 0, 4], [1, 0], and [1, 0].

How can I get the output shape in C++?
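For what it's worth, the shapes the model declares (with -1 for a dynamic dimension, as in the table above) can be read from the session before running anything, via `Ort::Session::GetOutputTypeInfo`. A minimal sketch; `shape_to_string` is just an illustrative helper of mine, and only the commented lines are the actual ONNX Runtime C++ API:

```cpp
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Render a declared shape, printing a dynamic dimension (-1) as "dyn",
// e.g. {1, -1, 4} -> "[1, dyn, 4]".
std::string shape_to_string(const std::vector<int64_t> &shape) {
  std::ostringstream out;
  out << '[';
  for (size_t i = 0; i < shape.size(); ++i) {
    if (i) out << ", ";
    if (shape[i] < 0) out << "dyn";
    else out << shape[i];
  }
  out << ']';
  return out.str();
}

// With a live session you would fill `shape` like this:
//   Ort::TypeInfo info = session.GetOutputTypeInfo(/*index=*/1); // "boxes"
//   auto tensorInfo = info.GetTensorTypeAndShapeInfo();
//   std::vector<int64_t> shape = tensorInfo.GetShape();          // {1, -1, -1}
```

Note this only gives the declared shape; the concrete runtime shape (with real values in place of -1) is only available on the tensors returned by `session.Run()`.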

  • I also need a solution to such a problem. I'm running face detection, and the output of the model depends on the number of detected faces (there might be no face, or N faces). How do I create the outputTensor? – fisakhan Nov 24 '22 at 10:58

1 Answer


I get the output shape in C++ like this:

auto outputTensor = session.Run(runOptions, inputNames.data(), &inputTensor, 1,
                                outputNames.data(), 1);
assert(outputTensor.size() == 1 && outputTensor.front().IsTensor());
if (outputTensor[0].IsTensor())
{
    auto outputInfo = outputTensor[0].GetTensorTypeAndShapeInfo();
    std::cout << "GetElementType: " << outputInfo.GetElementType() << "\n";
    std::cout << "Dimensions of the output: " << outputInfo.GetShape().size() << "\n";
    std::cout << "Shape of the output: ";
    for (size_t shapeI = 0; shapeI < outputInfo.GetShape().size(); shapeI++)
        std::cout << outputInfo.GetShape()[shapeI] << ", ";
}
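Once you have the runtime shape, the second dimension of `boxes` is the detection count, so a shape of [1, 0, 4] simply means zero detections in that image, not a wrong shape. A small sketch; `num_detections` is a hypothetical helper of mine, and the commented lines show the actual ONNX Runtime calls you would feed it with:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: the "boxes" output is [batch, num_detections, 4],
// so the runtime value of the second (dynamic) dimension is the number of
// detected objects. A shape of [1, 0, 4] therefore means "no detections".
int64_t num_detections(const std::vector<int64_t> &boxes_shape) {
  return boxes_shape.size() == 3 ? boxes_shape[1] : 0;
}

// With the outputTensor from session.Run() above, you would obtain the
// inputs to this helper like so (real ONNX Runtime C++ API):
//   std::vector<int64_t> shape =
//       outputTensor[0].GetTensorTypeAndShapeInfo().GetShape();
//   const float *boxes = outputTensor[0].GetTensorData<float>();
//   int64_t n = num_detections(shape);  // boxes then holds n * 4 floats
```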
fisakhan
  • I can't understand your code very well, can you elaborate more? – Nicholas Jela Dec 03 '22 at 10:17
  • Change your session.Run() command as mentioned (see also https://github.com/microsoft/onnxruntime/issues/4466). Once you get the output of the inference (outputTensor in this example code), you can follow this code to find the shape of the received output. – fisakhan Dec 04 '22 at 20:59