
I have a Core ML model that I generated using Create ML. If I drag and drop that model into Xcode, it automatically creates a class for me, which I can use to detect/classify an image. The generated class has a prediction function that returns the class label.

My question is: if I can classify the image using the automatically generated model class, why should I use the Vision framework to classify an image? What benefits does the Vision framework provide over the auto-generated class method?

Andy Jazz
john doe

1 Answer


Think of Vision as a higher-level abstraction that deals specifically with computer vision tasks, whereas Core ML can also handle non-vision tasks (text, audio, tabular data, etc.).

Vision makes it a little easier to work with images. For example, you can hand Vision a UIImage (via its CGImage), whereas Core ML on its own requires you to first convert the image to a CVPixelBuffer. Vision also has options for how you want to resize/crop the images before they're given to Core ML.

Using Vision also makes sense if you're doing multiple computer vision tasks on the image, i.e. not just running a Core ML model but also some of the built-in tasks, such as face detection.
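To make the difference concrete, here's a minimal sketch of the Vision route. `MyClassifier` is a placeholder for whatever class Xcode generated from your .mlmodel; the rest uses the standard Vision APIs (`VNCoreMLModel`, `VNCoreMLRequest`, `VNImageRequestHandler`):

```swift
import UIKit
import Vision
import CoreML

// Direct Core ML route (for comparison): you must produce a
// correctly sized CVPixelBuffer yourself before calling
// MyClassifier(...).prediction(...).

// Vision route: Vision accepts a CGImage and handles the
// scaling/conversion to the model's expected input for you.
func classify(_ image: UIImage) throws {
    guard let cgImage = image.cgImage else { return }

    let coreMLModel = try MyClassifier(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        // For a classifier, results are VNClassificationObservation,
        // sorted by confidence.
        if let best = (request.results as? [VNClassificationObservation])?.first {
            print(best.identifier, best.confidence)
        }
    }
    // One of Vision's resize/crop options mentioned above.
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```

The same `VNImageRequestHandler` can run several requests at once (e.g. your `VNCoreMLRequest` plus a `VNDetectFaceRectanglesRequest`), which is where Vision pays off over calling the generated class directly.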

Matthijs Hollemans
  • Is a Vision request also faster, or does the Vision framework do the same thing, calling model.predict() in the background? – simplesystems Sep 06 '20 at 14:16
  • It's pretty much the same speed. Like you said, it also calls model.predict(). – Matthijs Hollemans Sep 06 '20 at 20:17
  • Ok, great, thanks! A bit off topic, but you seem to have some Core ML knowledge. Maybe you can have a look at my question here: https://stackoverflow.com/questions/63765266/createml-what-kind-of-objectdetector-network-is-trained – simplesystems Sep 07 '20 at 05:13