
Problem description
We would like our ONNX models to have some form of description and, ideally, some other metadata such as our internal version number. Currently we train with PyTorch Lightning and use onnxruntime for inference.

Below is a minimal executable example which assigns a model description by:

  1. Export with torch.onnx.export
  2. Load with onnx.load
  3. Set model.doc_string
  4. Save with onnx.save
  5. Load with onnxruntime.InferenceSession

The question

Using onnx for this seems unnecessary; is there a way to include a model description directly when calling torch.onnx.export?
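
For illustration, the hypothetical call below is roughly what we are after; the description keyword does not exist in torch.onnx.export as far as I can tell, so this is only a sketch of the kind of interface we are hoping for, not runnable code.

>>> # hypothetical only -- 'description' is NOT a real torch.onnx.export argument
>>> torch.onnx.export(model, dummy_input, "alexnet.onnx",
...                   input_names=input_names, output_names=output_names,
...                   description='my_description')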

Reproducible example

>>> import torch
>>> import torchvision
>>> from onnxruntime import InferenceSession
>>>
>>> # get a sample model
>>> dummy_input = torch.randn(10, 3, 224, 224)
>>> _model = torchvision.models.alexnet(pretrained=True)
>>> input_names = [ "actual_input_1" ] + [ "learned_%d" % i for i in range(16) ]
>>> output_names = [ "output1" ]
>>>
>>> # export onnx
>>> torch.onnx.export(_model, dummy_input, "alexnet.onnx", verbose=False, input_names=input_names, output_names=output_names, strip_doc_string=False)
>>>
>>> # read in the exported onnx for inference
>>> sess = InferenceSession('alexnet.onnx')
>>> meta = sess.get_modelmeta()
>>>
>>> # review onnx metadata
>>> meta.description
''

This downloads a demo model, and when we print meta.description we can see that it is blank.

I can set the description by loading the model with onnx, setting doc_string, and saving it again:

>>> import onnx
>>> model = onnx.load('alexnet.onnx')
>>> model.doc_string = 'my_description'
>>> onnx.save(model, 'alexnet2.onnx')
>>> sess = InferenceSession('alexnet2.onnx')
>>> meta = sess.get_modelmeta()
>>>
>>> # review onnx metadata
>>> meta.description
'my_description'
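
Since we also want other metadata such as our internal version number, the same onnx round trip appears to cover that too via metadata_props, which onnxruntime exposes as custom_metadata_map. A minimal sketch (the internal_version key is just an example of ours):

>>> # sketch: attach arbitrary key/value metadata alongside the description
>>> model = onnx.load('alexnet.onnx')
>>> entry = model.metadata_props.add()
>>> entry.key = 'internal_version'
>>> entry.value = '1.2.3'
>>> onnx.save(model, 'alexnet3.onnx')
>>> sess = InferenceSession('alexnet3.onnx')
>>> sess.get_modelmeta().custom_metadata_map
{'internal_version': '1.2.3'}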
  • Find any better solutions? I'm curious about this for the purpose of serializing reproducibility metadata (git hash, dataset version, etc.) inside the model binary. It would be much better than a sidecar file, since including it in the `.onnx` means it can't get separated from the weights/topology – Addison Klinke May 31 '22 at 15:59
  • I'm afraid not. I started storing models as a `dir` which had the applicable metadata as `.json` files; our model loader would then load the `dir`. Not necessarily elegant, but it maintained the metadata. – this_josh Jun 05 '22 at 07:41
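
For completeness, a minimal sketch of the directory-plus-sidecar workaround described in the comment above (the directory, file, and key names are just illustrative):

>>> # sketch of the dir + sidecar .json workaround from the comments
>>> import json, os
>>> os.makedirs('alexnet_model', exist_ok=True)
>>> torch.onnx.export(_model, dummy_input, 'alexnet_model/model.onnx',
...                   input_names=input_names, output_names=output_names)
>>> with open('alexnet_model/metadata.json', 'w') as f:
...     json.dump({'description': 'my_description', 'internal_version': '1.2.3'}, f)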

0 Answers