
I've built an Encoder/Decoder model (in PyTorch) and saved it as two separate mlmodel objects. For efficiency, I want to combine them into a single coremltools.models.pipeline model. With the two input models saved to disk, this is what I use to build the pipeline:

import coremltools
from coremltools.models import datatypes
from coremltools.models.pipeline import Pipeline

input_features = [('distorted_input', datatypes.Array(28*28))]
output_features = ['z_distribution', 'rectified_input']

pipeline = Pipeline(input_features, output_features)
pipeline.add_model(enc_mlmodel)
pipeline.add_model(dec_mlmodel)

pipeline_model = coremltools.models.MLModel(pipeline.spec)
pipeline_model.save('inputFixerPipeline.mlmodel')

Creating the pipeline runs fine, but the saved model fails to connect the input -- i.e., looking at the model in Netron, I can see that the distorted_input node is just hanging on its own, not wired to the first sub-model. The rest of the pipeline appears to be correct.
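As an aside, you can also check the wiring programmatically instead of opening Netron, by printing the declared inputs and outputs of each sub-model in the pipeline spec (a minimal sketch; it assumes the pipeline model from above and requires coremltools to be installed):

```python
# Print each sub-model's declared inputs/outputs from the pipeline spec.
# The first model's inputs should include 'distorted_input', and each
# model's outputs should feed the next model's inputs.
spec = pipeline_model.get_spec()
for i, sub in enumerate(spec.pipeline.models):
    ins = [inp.name for inp in sub.description.input]
    outs = [out.name for out in sub.description.output]
    print(f"model {i}: {ins} -> {outs}")
```

If the names don't line up between consecutive sub-models, that's where the connection breaks.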

Any thoughts?


1 Answer


Answering my own question: I had passed an image_input_names argument when converting the 2nd model in my pipeline. In fact, that model doesn't take an image, just a tensor, so I suppose it was somehow confusing the pipeline builder. Removing the image_input_names entry corrected the pipeline model right away.
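To illustrate, here's a rough sketch of the fix, assuming the older ONNX-based PyTorch-to-Core ML route (the file names and input names are illustrative, not from my actual project):

```python
from onnx_coreml import convert

# The encoder genuinely takes an image, so declaring its input as an
# image via image_input_names is fine here.
enc_mlmodel = convert('encoder.onnx', image_input_names=['distorted_input'])

# The decoder consumes a plain tensor (the latent vector), so do NOT
# pass image_input_names for it -- that's what broke the pipeline
# builder's input wiring in my case.
dec_mlmodel = convert('decoder.onnx')
```

In short: only declare an input as an image on the sub-model that actually receives image data.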

Hopefully this saves someone some time in the future.
