
Error:

I am using Caffe2 to run a pretrained model. The model is from https://github.com/onnx/models/blob/master/vision/classification/vgg/model/vgg16-7.onnx. I use the pretrained model unchanged, and I do not use https://github.com/onnx/optimizer.

the code is:

import onnx
import caffe2.python.onnx.backend

model = onnx.load("vgg16-7.onnx")
prepared_backend = caffe2.python.onnx.backend.prepare(model)

then an error happened:

WARNING:root:This caffe2 python run failed to load cuda module:No module named 'caffe2.python.caffe2_pybind11_state_gpu',and AMD hip module:No module named 'caffe2.python.caffe2_pybind11_state_hip'.Will run in CPU only mode.
WARNING: ONNX Optimizer has been moved to https://github.com/onnx/optimizer.
All further enhancements and fixes to optimizers will be done in this new repo.
The optimizer code in onnx/onnx repo will be removed in 1.9 release.

Traceback (most recent call last):
File "test.py", line 20, in
init_net, predict_net = c2.onnx_graph_to_caffe2_net(onnx_model_proto)
File "/home/eeodev/.local/lib/python3.6/site-packages/caffe2/python/onnx/backend.py", line 921, in onnx_graph_to_caffe2_net
return cls._onnx_model_to_caffe2_net(model, device=device, opset_version=opset_version, include_initializers=True)
File "/home/eeodev/.local/lib/python3.6/site-packages/caffe2/python/onnx/backend.py", line 876, in _onnx_model_to_caffe2_net
onnx_model = onnx.utils.polish_model(onnx_model)
File "/usr/local/lib64/python3.6/site-packages/onnx/utils.py", line 24, in polish_model
model = onnx.optimizer.optimize(model)
File "/usr/local/lib64/python3.6/site-packages/onnx/optimizer.py", line 55, in optimize
optimized_model_str = C.optimize(model_str, passes)
IndexError: Input 475 is undefined!

Who can tell me the solution?

Also, if it is a PyTorch model, when converting it to ONNX we can use torch.onnx.export(model, input, 'model.onnx', verbose=True, keep_initializers_as_inputs=True); with keep_initializers_as_inputs=True, loading the model with Caffe2 does not hit this error. But the model I use is a pretrained ONNX model, so how can I apply this method?
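
For reference, here is roughly how that export path looks for a PyTorch model (a minimal sketch assuming a torchvision VGG16, which is not the same file as the pretrained ONNX model above):

import torch
import torchvision

# Hypothetical stand-in model: a pretrained torchvision VGG16
model = torchvision.models.vgg16(pretrained=True)
dummy_input = torch.randn(1, 3, 224, 224)

# keep_initializers_as_inputs=True keeps the weights listed as graph inputs,
# which the deprecated ONNX optimizer expects
torch.onnx.export(model, dummy_input, 'model.onnx', verbose=True,
                  keep_initializers_as_inputs=True)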


1 Answer


I believe it is related to the IR gap issue: https://github.com/onnx/onnx/issues/2902. Currently the deprecated ONNX optimizer in the onnx/onnx repo cannot handle an ONNX model whose IR_VERSION >= 4 if the initializers are not included in the model's inputs. The workaround is to use the following script, which adds ValueInfo entries for the initializers in your model (contributed by @TMVector on GitHub):

import onnx

def add_value_info_for_constants(model: onnx.ModelProto):
    """
    Currently onnx.shape_inference doesn't use the shape of initializers, so add
    that info explicitly as ValueInfoProtos.
    Mutates the model.
    Args:
        model: The ModelProto to update.
    """
    # All (top-level) constants will have ValueInfos before IRv4 as they are all inputs
    if model.ir_version < 4:
        return

    def add_const_value_infos_to_graph(graph : onnx.GraphProto):
        inputs = {i.name for i in graph.input}
        existing_info = {vi.name: vi for vi in graph.value_info}
        for init in graph.initializer:
            # Check it really is a constant, not an input
            if init.name in inputs:
                continue

            # The details we want to add
            elem_type = init.data_type
            shape = init.dims

            # Get existing or create new value info for this constant
            vi = existing_info.get(init.name)
            if vi is None:
                vi = graph.value_info.add()
                vi.name = init.name

            # Even though it would be weird, we will not overwrite info even if it doesn't match
            tt = vi.type.tensor_type
            if tt.elem_type == onnx.TensorProto.UNDEFINED:
                tt.elem_type = elem_type
            if not tt.HasField("shape"):
                # Ensure we set an empty list if the const is scalar (zero dims)
                tt.shape.dim.extend([])
                for dim in shape:
                    tt.shape.dim.add().dim_value = dim

        # Handle subgraphs
        for node in graph.node:
            for attr in node.attribute:
                # Ref attrs refer to other attrs, so we don't need to do anything
                if attr.ref_attr_name != "":
                    continue

                if attr.type == onnx.AttributeProto.GRAPH:
                    add_const_value_infos_to_graph(attr.g)
                if attr.type == onnx.AttributeProto.GRAPHS:
                    for g in attr.graphs:
                        add_const_value_infos_to_graph(g)


    return add_const_value_infos_to_graph(model.graph)
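
For completeness, a minimal usage sketch (assuming the vgg16-7.onnx file from the question): note that the function mutates the model in place and returns None, so do not reassign its result.

import onnx
import caffe2.python.onnx.backend as backend

model = onnx.load("vgg16-7.onnx")
add_value_info_for_constants(model)  # mutates model in place; returns None
rep = backend.prepare(model, device="CPU")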
  • Thanks for your code. When I use it, the "IndexError: Input 475 is undefined!" problem is solved, but another problem happens. When I run import caffe2.python.onnx.backend as backend; model = onnx.load(model_path); model = add_value_info_for_constants(model), no error happens, but when I then run rep = backend.prepare(model, device="CPU"), an error happens: "Message='NoneType' object has no attribute 'SerializeToString'". Do you know why? – wwbnjs Mar 11 '21 at 03:28
  • Please just use the original model; you don't need to capture the return value. The model is updated in place after add_value_info_for_constants(model). – Chun-Wei Chen Mar 13 '21 at 23:47