I have a Core ML model that I generated with Create ML. If I drag and drop that model into Xcode, it automatically creates a class for me, which I can use to classify an image. The generated class has a prediction function that returns the class label.
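For example, this is roughly how I call the generated class directly (`MyImageClassifier` is a placeholder for whatever name Xcode generates from the model file; the input has to be a `CVPixelBuffer` already sized to the model's expected input):

```swift
import CoreML

// Hypothetical generated class name: MyImageClassifier.
// The generated prediction method takes a CVPixelBuffer, so the image
// must already be resized/converted to the model's input format.
func classify(_ pixelBuffer: CVPixelBuffer) throws -> String {
    let model = try MyImageClassifier(configuration: MLModelConfiguration())
    let output = try model.prediction(image: pixelBuffer)
    return output.classLabel  // top class label from the generated output type
}
```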
My question is: if I can classify the image using the automatically generated model class, why should I use the Vision framework to classify an image? In other words, what benefits does the Vision framework provide over the auto-generated class's prediction method?
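For reference, this is a sketch of what I understand the Vision-based route to look like for the same (hypothetical) model, where Vision wraps the Core ML model in a `VNCoreMLRequest`:

```swift
import Vision
import CoreML

// Same hypothetical MyImageClassifier model, driven through Vision.
func classifyWithVision(_ cgImage: CGImage) throws {
    let coreMLModel = try MyImageClassifier(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        // Classification models yield VNClassificationObservation results.
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print(top.identifier, top.confidence)
        }
    }
    // Vision handles scaling/cropping the image to the model's input size.
    request.imageCropAndScaleOption = .centerCrop
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```

This looks like more code than the generated class, which is why I'm unsure what it buys me.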