
I have converted a Keras model to an MLModel using coremltools 4.0, with limited success. It works, but only if I use an MLMultiArray for the output and then convert that array to an image, and that conversion takes orders of magnitude longer than the inference itself, which makes it unusable. The same model worked with TensorFlow 1 and coremltools 3.4, but it does not with TensorFlow 2 and coremltools 4.0b1. Adding a new layer to map the output from [0, 1] to [0, 255] does not do the trick. We have also realised that some extra layers were added automatically by coremltools, which might be the problem (a sketch of our conversion code is at the end of the question). Here is the image:

I tried to transpose the input using np.transpose, but instead of solving the problem it created a new one. If the input follows the format (3, 256, 256) I get the following error:

RuntimeError: { NSLocalizedDescription = "Input image feature input_1 does not match model description"; NSUnderlyingError = "Error Domain=com.apple.CoreML Code=0 "Image height (256) is not in allowed range (200..400)" UserInfo={NSLocalizedDescription=Image height (256) is not in allowed range (200..400)}"; }

But if the size is (256, 256, 3) I get the following error:

NSLocalizedDescription = "Failed to convert output output_1 to image"; NSUnderlyingError = "Error Domain=com.apple.CoreML Code=0 "Invalid array shape (\n 256,\n 256,\n 1\n) for converting to gray image" UserInfo={NSLocalizedDescription=Invalid array shape (\n 256,\n 256,\n 1\n) for converting to gray image}";

Do you have any idea?
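
For reference, this is roughly the conversion path we are using. It is only a minimal sketch, not our exact script: the keras_model variable, the scale factor, the grayscale output and the file name are assumptions, while input_1, output_1 and the 256x256 size come from the errors above (the real model uses a flexible input range rather than a fixed size).

import coremltools as ct
import coremltools.proto.FeatureTypes_pb2 as ft

# Convert the TF2/Keras model, declaring the input as an image
mlmodel = ct.convert(
    keras_model,  # hypothetical: the loaded tf.keras model
    inputs=[ct.ImageType(name="input_1", shape=(1, 256, 256, 3), scale=1 / 255.0)],
)

# coremltools 4 exposes the converted output only as a MultiArray, so we patch
# the spec afterwards to declare output_1 as a grayscale image
spec = mlmodel.get_spec()
output = spec.description.output[0]  # output_1
output.type.imageType.colorSpace = ft.ImageFeatureType.GRAYSCALE
output.type.imageType.width = 256
output.type.imageType.height = 256
mlmodel = ct.models.MLModel(spec)
mlmodel.save("model.mlmodel")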

  • See here: https://stackoverflow.com/questions/63006397/mlmodel-works-with-multiarray-output-but-cannot-successfully-change-the-output-t – Matthijs Hollemans Jul 23 '20 at 09:28
  • Yes, I know. I tried to add a comment adding this information but my comment was deleted because "This does not really answer the question. If you have a different question, you can ask it by clicking Ask Question." – Antonio Esteban Jul 23 '20 at 09:40
  • It looked like the same question? Anyway, it's possible this is a coremltools v4 issue, since that's still in beta. The correct shape for images in Core ML is (3, 256, 256) and (1, 256, 256), not (256, 256, 3) or (256, 256, 1). – Matthijs Hollemans Jul 23 '20 at 12:31

1 Answer


We found the error! The problem is that a transpose layer is automatically added with the wrong axis order. To work around it, we reset the layer's axes to the identity permutation:

# The auto-inserted transpose layer is the fourth layer from the end in our network
transpose_layer = mlmodel_spec.neuralNetwork.layers[-4].transpose
# Overwrite its axes with the identity permutation so it no longer reorders the output
del transpose_layer.axes[:]
transpose_layer.axes.extend([0, 1, 2, 3])
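
For completeness, mlmodel_spec above is the protobuf spec of the converted model, and the model has to be rebuilt from the patched spec afterwards, roughly like this (the file names are placeholders):

import coremltools as ct

mlmodel = ct.models.MLModel("model.mlmodel")  # the converted model
mlmodel_spec = mlmodel.get_spec()

# ... reset transpose_layer.axes as shown above ...

fixed_model = ct.models.MLModel(mlmodel_spec)  # rebuild from the patched spec
fixed_model.save("model_fixed.mlmodel")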