Using Apple's Create ML application (a developer tool that comes with Xcode), I trained an image classification model and exported it. I then loaded the model in a Python project using the coremltools package:
import coremltools
import PIL.Image
def load_image(path, resize_to=None):
    img = PIL.Image.open(path)
    if resize_to is not None:
        img = img.resize(resize_to, PIL.Image.ANTIALIAS)
    # Swap the red and blue channels to convert RGB -> BGR.
    r, g, b = img.split()
    img = PIL.Image.merge("RGB", (b, g, r))
    return img
model = coremltools.models.MLModel('classification1.mlmodel')
img_dir = "img1"
img = load_image(img_dir, resize_to=(299, 299))
result = model.predict({'image': img})
print(result)
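As a sanity check that the split/merge trick in load_image really swaps the red and blue channels, here is a minimal sketch on a synthetic 1x1 image (the pixel values are made up purely for illustration):

```python
import PIL.Image

# Build a 1x1 RGB image with distinct per-channel values.
img = PIL.Image.new("RGB", (1, 1), (10, 20, 30))

# Swap the red and blue channels, as load_image does.
r, g, b = img.split()
swapped = PIL.Image.merge("RGB", (b, g, r))

print(img.getpixel((0, 0)))      # (10, 20, 30)
print(swapped.getpixel((0, 0)))  # (30, 20, 10)
```

So the channel data is reordered from RGB to BGR, even though the merged image is still labeled "RGB" by PIL.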
This code printed a predicted class label different from the one I got when I classified img1 directly in the Create ML application. I believe the application applies some adjustment to the input image before predicting its class label. When I run print(model), I get the following information about the input:
input {
  name: "image"
  shortDescription: "Input image to be classified"
  type {
    imageType {
      width: 299
      height: 299
      colorSpace: BGR
      imageSizeRange {
        widthRange {
          lowerBound: 299
          upperBound: -1
        }
        heightRange {
          lowerBound: 299
          upperBound: -1
        }
      }
    }
  }
}
I believe I have made the required adjustments by resizing the image and converting the color space. Why don't the predictions from the code and the application agree?