I'm learning Apple's Vision and Core ML frameworks but got stuck on how to use my own retrained models. I tried training a VGG16 model with Keras based on this tutorial. Everything looked OK except for some Keras version warnings. Then I tried converting the resulting model with coremltools using the following code:
import coremltools

# Convert the retrained Keras model to a Core ML model.
coremlModel = coremltools.converters.keras.convert(
    kmodel,
    input_names = 'image',
    image_input_names = 'image',      # treat the input as an image, not an MLMultiArray
    output_names = 'classLabelProbs',
    class_labels = ['cats', 'dogs'],  # two labels for the retrained classifier
)
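As far as I understand, the number of class labels has to match the size of the model's final softmax layer, so a quick sanity check like the following should catch a mismatch before converting (a minimal sketch; kmodel is the retrained Keras model from the tutorial):

# Sanity check: the label count must equal the size of the final output layer.
class_labels = ['cats', 'dogs']
num_outputs = kmodel.output_shape[-1]  # e.g. (None, 1000) for stock VGG16
print('model outputs:', num_outputs, '| labels:', len(class_labels))
assert num_outputs == len(class_labels), 'label count != output size'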
During the conversion it gave me some version compatibility warnings, but otherwise it was successful:
WARNING:root:Keras version 2.0.6 detected. Last version known to be fully compatible of Keras is 2.0.4 .
WARNING:root:TensorFlow version 1.2.1 detected. Last version known to be fully compatible is 1.1.1 .
So I loaded this model into Apple's Vision+ML example code, but every time I tried to classify an image it failed with the following errors:
Vision+ML Example[2090:2012481] Error: The VNCoreMLTransform request failed
Vision+ML Example[2090:2012481] Didn't get VNClassificationObservations
Error Domain=com.apple.vis Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x1c025d130 {Error Domain=com.apple.CoreML Code=0 "Dimensions of layer 'classLabelProbs' is not the same size as the number of class labels." UserInfo={NSLocalizedDescription=Dimensions of layer 'classLabelProbs' is not the same size as the number of class labels.}}}
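To double-check what Core ML actually recorded for the output layer, the converted model's spec can be printed with coremltools' standard MLModel API (a sketch; if the retraining didn't replace VGG16's original head, I'd expect a 1000-dimensional output here):

# Print the output feature descriptions of the converted model.
spec = coremlModel.get_spec()
for out in spec.description.output:
    print(out.name)
    print(out.type)  # includes the vector dimensions Core ML expects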
I was guessing this is because the pre-trained VGG16 model already has 1000 categories, so I tried passing 1000 class labels, and then 1002 (the original 1000 plus cats and dogs), but I still got the same error.
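For what it's worth, my understanding is that a retrained two-class model should end in a 2-unit softmax rather than VGG16's original 1000-unit layer. Here is a minimal sketch of what I assume that top should look like (the 256-unit hidden layer and the input shape are my guesses, not taken from the tutorial):

from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense

# Load VGG16 without its 1000-class classification head.
base = VGG16(weights='imagenet', include_top=False,
             input_shape=(224, 224, 3))

# Attach a new two-class head so the output matches ['cats', 'dogs'].
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)  # 2 units == 2 labels

kmodel = Model(inputs=base.input, outputs=predictions)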
Did I miss anything? I'd greatly appreciate any clues or help.