Working

I'm working in a SageMaker Jupyter notebook (environment: anaconda3/envs/mxnet_p36/lib/python3.6).

I successfully ran this tutorial: https://github.com/onnx/tutorials/blob/master/tutorials/MXNetONNXExport.ipynb


Not working

Then, in the same environment, I tried to apply the same process to files generated by a SageMaker training job. I used the S3 model artifact files as input, changing some lines of the tutorial code to fit my needs. The model was trained with the built-in object detection algorithm (SSD with a VGG-16 base network) and the hyperparameter image_shape: 300.

sym = './model_algo_1-symbol.json'
params = './model_algo_1-0000.params'
input_shape = (1,3,300,300)

And I passed verbose=True as the last parameter of the export_model() call:

converted_model_path = onnx_mxnet.export_model(sym, params, [input_shape], np.float32, onnx_file, True)
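For completeness, here are the lines above combined into one self-contained sketch (the file paths and the onnx_file name are my local choices, not fixed by the tutorial); it skips the export when the artifact files are not present:

```python
import os


def export_to_onnx(sym, params, input_shape, onnx_file):
    """Export an MXNet symbol/params pair to ONNX, mirroring the
    MXNetONNXExport tutorial. Returns the ONNX file path, or None
    when the model artifact files are missing."""
    if not (os.path.exists(sym) and os.path.exists(params)):
        return None
    import numpy as np
    from mxnet.contrib import onnx as onnx_mxnet
    # Same positional arguments as in the tutorial, with verbose=True.
    return onnx_mxnet.export_model(sym, params, [input_shape],
                                   np.float32, onnx_file, True)


# (1, 3, 300, 300) is NCHW: batch size 1, 3 channels, 300x300 image,
# matching the image_shape: 300 hyperparameter.
print(export_to_onnx('./model_algo_1-symbol.json',
                     './model_algo_1-0000.params',
                     (1, 3, 300, 300), 'model.onnx'))
```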

When I run the code, I get this error (the full verbose output is at the end of the post):

MXNetError: Error in operator multibox_target: [14:36:32] src/operator/contrib/./multibox_target-inl.h:224: Check failed: lshape.ndim() == 3 (-1 vs. 3) : Label should be [batch, num_labels, label_width] tensor
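The multibox_target operator needs a label input, which hints that the exported symbol is a training graph rather than a deploy graph. One way to check is to list the operators that appear in model_algo_1-symbol.json. A minimal sketch, assuming only the standard MXNet symbol-JSON layout ("nodes" entries with an "op" field) and using an inline toy graph in place of the real file:

```python
import json

# Toy stand-in for model_algo_1-symbol.json; for the real file use
# json.load(open('./model_algo_1-symbol.json')). Node/op names invented.
SYMBOL_JSON = """
{
  "nodes": [
    {"op": "null",        "name": "data",    "inputs": []},
    {"op": "Convolution", "name": "conv1_1", "inputs": [[0, 0, 0]]},
    {"op": "null",        "name": "label",   "inputs": []},
    {"op": "_contrib_MultiBoxTarget", "name": "multibox_target",
     "inputs": [[1, 0, 0], [2, 0, 0]]}
  ],
  "arg_nodes": [0, 2],
  "heads": [[3, 0, 0]]
}
"""


def list_ops(symbol_json):
    """Return the distinct operator types used in a symbol graph,
    skipping 'null' placeholder nodes (data inputs and weights)."""
    graph = json.loads(symbol_json)
    return sorted({n["op"] for n in graph["nodes"] if n["op"] != "null"})


print(list_ops(SYMBOL_JSON))  # ['Convolution', '_contrib_MultiBoxTarget']
```

If the real symbol file lists target/loss-style operators, the graph includes training-only layers that the ONNX exporter cannot shape-infer without a label.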

Question

I have not been able to find a solution so far:

  • maybe the input_shape = (1,3,300,300) is wrong, but I have not been able to verify this;
  • maybe the model contains some unexpected layer or operator;

Does anybody know a way to fix this problem, or a workaround to use the model on a local machine (i.e. without having to deploy it to AWS)?
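One workaround sometimes used for this class of error is to truncate the training graph just before the training-only operators: a symbol JSON lists nodes in topological order, so every node's inputs have smaller indices, and keeping a prefix of the node list keeps all dependencies of the last kept node. The sketch below is a toy illustration of that idea on a simplified graph, not tested against a real SageMaker artifact; node names are invented:

```python
def truncate_at(graph, node_name):
    """Return a copy of a symbol graph cut off after `node_name`.
    Because nodes are topologically ordered, keeping the prefix up to
    that node preserves all of its dependencies while dropping later
    (e.g. training-only) operators."""
    idx = next(i for i, n in enumerate(graph["nodes"])
               if n["name"] == node_name)
    return {
        "nodes": graph["nodes"][:idx + 1],
        "arg_nodes": [a for a in graph["arg_nodes"] if a <= idx],
        "heads": [[idx, 0, 0]],  # the kept node becomes the sole output
    }


# Simplified training graph: a conv output feeding a training-only op.
graph = {
    "nodes": [
        {"op": "null", "name": "data", "inputs": []},
        {"op": "Convolution", "name": "conv1_1", "inputs": [[0, 0, 0]]},
        {"op": "null", "name": "label", "inputs": []},
        {"op": "_contrib_MultiBoxTarget", "name": "multibox_target",
         "inputs": [[1, 0, 0], [2, 0, 0]]},
    ],
    "arg_nodes": [0, 2],
    "heads": [[3, 0, 0]],
}

deploy = truncate_at(graph, "conv1_1")
print([n["name"] for n in deploy["nodes"]])  # ['data', 'conv1_1']
```

With MXNet itself available, the same trimming is usually done via mx.sym.load() plus Symbol.get_internals(), selecting the detection output layer by name; the right layer name depends on the specific SSD graph.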


The verbose output:
  infer_shape error. Arguments:
  data: (1, 3, 300, 300)
  conv3_2_weight: (256, 256, 3, 3)
  fc7_bias: (1024,)
  multi_feat_3_conv_1x1_conv_weight: (128, 512, 1, 1)
  conv4_1_bias: (512,)
  conv5_3_bias: (512,)
  relu4_3_cls_pred_conv_bias: (16,)
  multi_feat_2_conv_3x3_relu_cls_pred_conv_weight: (24, 512, 3, 3)
  relu4_3_loc_pred_conv_bias: (16,)
  relu7_cls_pred_conv_weight: (24, 1024, 3, 3)
  conv3_3_bias: (256,)
  multi_feat_5_conv_3x3_relu_cls_pred_conv_weight: (16, 256, 3, 3)
  conv4_3_weight: (512, 512, 3, 3)
  conv1_2_bias: (64,)
  multi_feat_2_conv_3x3_relu_cls_pred_conv_bias: (24,)
  multi_feat_4_conv_3x3_conv_weight: (256, 128, 3, 3)
  conv4_1_weight: (512, 256, 3, 3)
  relu4_3_scale: (1, 512, 1, 1)
  multi_feat_4_conv_3x3_conv_bias: (256,)
  multi_feat_5_conv_3x3_relu_cls_pred_conv_bias: (16,)
  conv2_2_weight: (128, 128, 3, 3)
  multi_feat_3_conv_3x3_relu_loc_pred_conv_weight: (24, 256, 3, 3)
  multi_feat_5_conv_3x3_conv_bias: (256,)
  conv5_1_bias: (512,)
  multi_feat_3_conv_3x3_conv_bias: (256,)
  conv2_1_bias: (128,)
  conv5_2_weight: (512, 512, 3, 3)
  multi_feat_5_conv_3x3_relu_loc_pred_conv_weight: (16, 256, 3, 3)
  multi_feat_4_conv_3x3_relu_loc_pred_conv_weight: (16, 256, 3, 3)
  multi_feat_2_conv_3x3_conv_weight: (512, 256, 3, 3)
  multi_feat_2_conv_1x1_conv_bias: (256,)
  multi_feat_2_conv_1x1_conv_weight: (256, 1024, 1, 1)
  conv4_3_bias: (512,)
  relu7_cls_pred_conv_bias: (24,)
  fc6_bias: (1024,)
  conv2_1_weight: (128, 64, 3, 3)
  multi_feat_2_conv_3x3_conv_bias: (512,)
  multi_feat_2_conv_3x3_relu_loc_pred_conv_weight: (24, 512, 3, 3)
  multi_feat_5_conv_1x1_conv_bias: (128,)
  relu7_loc_pred_conv_bias: (24,)
  multi_feat_3_conv_3x3_relu_loc_pred_conv_bias: (24,)
  conv3_3_weight: (256, 256, 3, 3)
  conv1_2_weight: (64, 64, 3, 3)
  multi_feat_2_conv_3x3_relu_loc_pred_conv_bias: (24,)
  conv1_1_bias: (64,)
  multi_feat_4_conv_3x3_relu_cls_pred_conv_bias: (16,)
  conv4_2_weight: (512, 512, 3, 3)
  conv5_3_weight: (512, 512, 3, 3)
  relu7_loc_pred_conv_weight: (24, 1024, 3, 3)
  multi_feat_3_conv_3x3_conv_weight: (256, 128, 3, 3)
  conv3_1_weight: (256, 128, 3, 3)
  multi_feat_4_conv_3x3_relu_cls_pred_conv_weight: (16, 256, 3, 3)
  relu4_3_loc_pred_conv_weight: (16, 512, 3, 3)
  multi_feat_5_conv_3x3_conv_weight: (256, 128, 3, 3)
  fc7_weight: (1024, 1024, 1, 1)
  conv4_2_bias: (512,)
  multi_feat_3_conv_3x3_relu_cls_pred_conv_weight: (24, 256, 3, 3)
  multi_feat_3_conv_3x3_relu_cls_pred_conv_bias: (24,)
  conv2_2_bias: (128,)
  conv5_1_weight: (512, 512, 3, 3)
  multi_feat_3_conv_1x1_conv_bias: (128,)
  multi_feat_4_conv_3x3_relu_loc_pred_conv_bias: (16,)
  conv1_1_weight: (64, 3, 3, 3)
  multi_feat_4_conv_1x1_conv_bias: (128,)
  conv3_1_bias: (256,)
  multi_feat_5_conv_3x3_relu_loc_pred_conv_bias: (16,)
  multi_feat_4_conv_1x1_conv_weight: (128, 256, 1, 1)
  fc6_weight: (1024, 512, 3, 3)
  multi_feat_5_conv_1x1_conv_weight: (128, 256, 1, 1)
  conv3_2_bias: (256,)
  conv5_2_bias: (512,)
  relu4_3_cls_pred_conv_weight: (16, 512, 3, 3)
