I have converted a Keras model to an MLModel using coremltools 4.0, with limited success. It works, but only if I use an MLMultiArray for the output and then convert the array to an image. Converting the array to an image takes orders of magnitude longer than the inference itself, which makes it unusable. This worked with TensorFlow 1 and coremltools 3.4, but it does not work with TensorFlow 2 and coremltools 4.0b1. Adding a new layer to scale the output from [0, 1] to [0, 255] does not do the trick either. We have also realised that some extra layers were added automatically by coremltools, which might be the problem. Here is the image:
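For reference, this is the kind of spec edit I have been attempting in order to get an image output directly from Core ML (a minimal sketch, assuming the converted model is in mlmodel and the output is named output_1; sizes would need adjusting):

import coremltools as ct
import coremltools.proto.FeatureTypes_pb2 as ft

# Mark the output as a grayscale image instead of an MLMultiArray, so
# Core ML returns a CVPixelBuffer directly (sketch; names/sizes assumed).
spec = mlmodel.get_spec()
output = spec.description.output[0]   # assumes a single output, "output_1"
output.type.imageType.colorSpace = ft.ImageFeatureType.GRAYSCALE
output.type.imageType.width = 256
output.type.imageType.height = 256
mlmodel = ct.models.MLModel(spec)
mlmodel.save("model_image_output.mlmodel")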
I tried to transpose the input using np.transpose, but that didn't solve the problem; it just created a new one. If the input has shape (3, 256, 256), I get the following error:
RuntimeError: { NSLocalizedDescription = "Input image feature input_1 does not match model description"; NSUnderlyingError = "Error Domain=com.apple.CoreML Code=0 "Image height (256) is not in allowed range (200..400)" UserInfo={NSLocalizedDescription=Image height (256) is not in allowed range (200..400)}"; }
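For what it's worth, the conversion call I am aiming for looks roughly like this (a sketch, assuming the Keras model is in keras_model; ct.ImageType is the coremltools 4.0 way to declare an image input, so a pixel buffer can be passed instead of a hand-transposed array):

import coremltools as ct

# Declare the input as an RGB image so the model accepts an image feature
# directly, rather than a raw (3, 256, 256) array (sketch; names assumed).
mlmodel = ct.convert(
    keras_model,
    inputs=[ct.ImageType(name="input_1", shape=(1, 256, 256, 3),
                         scale=1 / 255.0)],
)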
But if the shape is (256, 256, 3), I get this error instead:
NSLocalizedDescription = "Failed to convert output output_1 to image"; NSUnderlyingError = "Error Domain=com.apple.CoreML Code=0 "Invalid array shape (\n 256,\n 256,\n 1\n) for converting to gray image" UserInfo={NSLocalizedDescription=Invalid array shape (\n 256,\n 256,\n 1\n) for converting to gray image}";
Do you have any idea what might be causing this?