
I'm running some tests converting ML models from several frameworks to ONNX, but I'm consistently getting warnings and errors related to the opset. For example, the code below is for a LightGBM model:

import numpy as np
import lightgbm as lgb
import timeit
import onnxruntime as ort
from onnxmltools.convert import convert_lightgbm
from onnxconverter_common.data_types import FloatTensorType

# Create some random data for binary classification
max_depth = 8
num_classes = 2
n_estimators = 1000
n_features = 3000
n_fit = 100
n_pred = 100000
X = np.random.rand(n_fit, n_features)
X = np.array(X, dtype=np.float32)
y = np.random.randint(num_classes, size=n_fit)
test_data = np.random.rand(n_pred, n_features).astype('float32')

model = lgb.LGBMClassifier(n_estimators=n_estimators, max_depth=max_depth, pred_early_stop=False)
model.fit(X, y)

# Use ONNXMLTOOLS to convert the model to ONNXML
input_types = [("input", FloatTensorType([n_pred, n_features]))] # Define the input types for the ONNX model
onnx_ml_model = convert_lightgbm(model, initial_types=input_types)

At the last line, I get the following error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-13-11c5c9ef194b> in <module>
      1 # Use ONNXMLTOOLS to convert the model to ONNXML
      2 input_types = [("input", FloatTensorType([n_pred, n_features]))] # Define the inputs for the ONNX
----> 3 onnx_ml_model = convert_lightgbm(model, initial_types=input_types)
.............................
RuntimeError: target_opset 15 is higher than the number of the installed onnx package or the converter support (13).

The onnx version I have installed is 1.10.2, which is the most recent one; the same goes for onnxmltools, where I have version 1.10.0, also the most recent.
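(For reference, this is how I check the installed onnx version and the highest default-domain opset it supports; onnx_opset_version comes from onnx.defs:)

import onnx
from onnx.defs import onnx_opset_version

print(onnx.__version__)      # 1.10.2 in my environment
print(onnx_opset_version())  # highest default-domain opset this onnx build supports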

To work around that, I can use the target_opset parameter of convert_lightgbm, e.g.:

onnx_ml_model = convert_lightgbm(model, initial_types=input_types, target_opset=13)

For that parameter I get the following message/warning:

The maximum opset needed by this model is only 9.

I get the same message for any target_opset between 13 and 9; for a target_opset below 9 I don't get any message.
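Since the converter reports that the model only needs opset 9, I assume I could also pin it there to silence the warning (I haven't checked whether this changes anything downstream):

onnx_ml_model = convert_lightgbm(model, initial_types=input_types, target_opset=9)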

To continue, I leave the model with target_opset=13:

onnx_ml_model = convert_lightgbm(model, initial_types=input_types, target_opset=13)

And run the code below to quantize the model:

import onnx
from onnxruntime.quantization import quantize_dynamic, QuantType, quantize_qat

model_path = "ONNX_edge_deployment/src/APIs/YOLO_ONNX/lgbm.onnx"
model_quant = 'ONNX_edge_deployment/src/APIs/YOLO_ONNX/lgbm_quant.onnx'

onnx.save(onnx_ml_model, model_path)                        # save the converted model to disk
quantized_model = quantize_qat(model_path, model_quant)     # quantize it into a new file

But at the last line, I get the following error:

InferenceError                            Traceback (most recent call last)
..............
~\Anaconda3\lib\site-packages\onnx\shape_inference.py in infer_shapes(model, check_type, strict_mode, data_prop)
     40     if isinstance(model, (ModelProto, binary_type)):
     41         model_str = model if isinstance(model, binary_type) else model.SerializeToString()
---> 42         inferred_model_str = C.infer_shapes(model_str, check_type, strict_mode, data_prop)
     43         return onnx.load_from_string(inferred_model_str)
     44     elif isinstance(model, string_types):

InferenceError: [ShapeInferenceError] (op_type:ZipMap, node name: ZipMap): [ShapeInferenceError] type case unsupported for symbolic shape inference. inferred=5
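From what I can tell, the ZipMap node outputs a sequence of maps (the per-class probabilities), which the shape inference run by the quantizer apparently cannot handle. One workaround I'm considering is disabling ZipMap at conversion time, assuming the zipmap keyword is available in my onnxmltools version (I haven't verified that this actually fixes the quantization error):

# Possible workaround: skip the ZipMap output node entirely
onnx_ml_model = convert_lightgbm(model, initial_types=input_types, target_opset=13, zipmap=False)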

Also, there seems to be something odd related to the version of the model. Following this tutorial on opsets at http://onnx.ai/, in particular this part:

domains = onnx_ml_model.opset_import
for dom in domains:
    print("domain: %r, version: %r" % (dom.domain, dom.version))

I get:

domain: '', version: 9
domain: 'ai.onnx.ml', version: 1

But in the tutorial they get:

domain: '', version: 15
domain: 'ai.onnx.ml', version: 2

How is it that an older tutorial gets newer versions in both cases? 9 vs. 15 and 1 vs. 2.
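Out of curiosity, I also wondered whether onnx.version_converter could bump the default-domain opset after conversion; a minimal sketch (I'm not sure the converter handles the ai.onnx.ml ops in this model):

from onnx import version_converter

# Attempt to upgrade the default domain to opset 15 (may fail on ai.onnx.ml nodes)
converted_model = version_converter.convert_version(onnx_ml_model, 15)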

Luis Ramon Ramirez Rodriguez
  • Hi, were you able to find a solution? I am facing the same error `InferenceError: [ShapeInferenceError] (op_type:ZipMap, node name: ZipMap): [ShapeInferenceError] type case unsupported for symbolic shape inference. inferred=5` – pratsbhatt Jan 26 '22 at 13:17

0 Answers