
Steps I followed:

  • Saved the TensorFlow model using the tf.saved_model.save function provided by TF (see the sketch after this list).
  • Ran the OpenVINO Model Optimizer for TF using the following command:
python3 mo_tf.py --saved_model_dir $PATH_TO_SAVED_MODEL --output_dir $OUTPUT_PATH --input name_input_layer_1,name_input_layer_2 --input_shape [1,30,180,320,3],[1,30,180,320,3] --model_name model1
  • Loaded the .xml and .bin files from $OUTPUT_PATH in the code:
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='OUTPUT_PATH/model1.xml', weights='OUTPUT_PATH/model1.bin')
exec_net = ie.load_network(network=net, device_name="CPU")
  • Ran inference with the model:
exec_net.infer({ "name_input_layer_1": a_sample, "name_input_layer_2": b_sample })
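For reference, a minimal sketch of the saving step; the model below is only a stand-in with the same input names and channels-last shapes as my real network:

import tensorflow as tf

# Stand-in for the real network: two 5-D channels-last inputs whose names
# match the --input argument passed to mo_tf.py.
in1 = tf.keras.Input(shape=(30, 180, 320, 3), name="name_input_layer_1")
in2 = tf.keras.Input(shape=(30, 180, 320, 3), name="name_input_layer_2")
out = tf.keras.layers.Concatenate()([in1, in2])
model = tf.keras.Model(inputs=[in1, in2], outputs=out)

# Export as a SavedModel for the Model Optimizer ($PATH_TO_SAVED_MODEL).
tf.saved_model.save(model, "/path/to/saved_model")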

When the code reaches the infer line, it raises the following error:

ValueError: could not broadcast input array from shape (1,30,180,320,3) into shape (1,3,30,180,320)

I tried giving the shape of the input when I ran the optimizer, but it did not work. I also tried adding a batch number instead, and that did not work either. I know TensorFlow works with channels last by default, but for some reason OpenVINO still changes the order when I run inference. Am I missing something? Any help would be appreciated.
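For reference, a quick way to see which shape the converted IR actually expects (same Inference Engine API and net object as above; the printing is only for debugging):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='OUTPUT_PATH/model1.xml', weights='OUTPUT_PATH/model1.bin')

# Print the shape each input expects after conversion,
# e.g. [1, 3, 30, 180, 320] instead of [1, 30, 180, 320, 3].
for name, info in net.input_info.items():
    print(name, info.input_data.shape)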

Polo D. Vargas

1 Answer


Looks like the converted network is expecting shape (1, 3, 30, 180, 320), while in your first command you've specified the input shape as (1, 30, 180, 320, 3). Model Optimizer converts TensorFlow's channels-last layout to channels-first by default, so transpose your inputs before calling infer (see the sketch below).
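A minimal sketch of that fix; random arrays stand in for a_sample and b_sample, and exec_net is the network you already loaded in the question:

import numpy as np

# Channels-last samples with the shape from the question: (1, 30, 180, 320, 3).
a_sample = np.random.rand(1, 30, 180, 320, 3).astype(np.float32)
b_sample = np.random.rand(1, 30, 180, 320, 3).astype(np.float32)

# The IR expects channels-first (N, C, D, H, W), so move the channel axis
# from the last position to position 1 before inference.
a_nchw = np.transpose(a_sample, (0, 4, 1, 2, 3))   # -> (1, 3, 30, 180, 320)
b_nchw = np.transpose(b_sample, (0, 4, 1, 2, 3))

result = exec_net.infer({
    "name_input_layer_1": a_nchw,
    "name_input_layer_2": b_nchw,
})

Depending on the OpenVINO version, Model Optimizer may also offer options to keep the original layout, but transposing on the application side is the simplest fix.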

UdonN00dle