I have been trying to get started with Core ML (Apple's machine learning framework), and I am following these two tutorials:
1) https://www.appcoda.com/coreml-introduction/
2) https://www.raywenderlich.com/164213/coreml-and-vision-machine-learning-in-ios-11-tutorial
The first tutorial uses an Inception v3 model and the second uses the Places205-GoogLeNet model for its explanation.

After all the basic setup steps, the Places205-GoogLeNet tutorial uses the following code:
func detectScene(image: CIImage) {
    answerLabel.text = "detecting scene..."
    // Load the ML model through its generated class
    guard let model = try? VNCoreMLModel(for: GoogLeNetPlaces().model) else {
        fatalError("can't load Places ML model")
    }
}
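For context, here is roughly how that Vision-based approach continues after the model is wrapped. The request and handler types are the standard Vision API (VNCoreMLRequest, VNImageRequestHandler); the completion-handler body and label update are my own sketch, not code from the tutorial:

    // Sketch: build a Vision request around the wrapped model.
    let request = VNCoreMLRequest(model: model) { request, error in
        // Classification models yield VNClassificationObservation results.
        guard let results = request.results as? [VNClassificationObservation],
              let topResult = results.first else {
            fatalError("unexpected result type from VNCoreMLRequest")
        }
        DispatchQueue.main.async {
            self.answerLabel.text = "\(topResult.identifier) " +
                "(\(Int(topResult.confidence * 100))%)"
        }
    }

    // Vision scales/crops the CIImage to the model's input size for you.
    let handler = VNImageRequestHandler(ciImage: image)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }

The point is that with Vision you hand over a CIImage and let the framework do the preprocessing.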
and the Inception v3 tutorial uses this:

    guard let prediction = try? model.prediction(image: pixelBuffer!) else {
        return
    }
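With this second, direct Core ML approach, as I understand it, the caller must supply a CVPixelBuffer already matching the model's expected input size. A rough sketch of the surrounding code (the Inceptionv3 class is the one Xcode generates from the .mlmodel; `resize(to:)` and `toPixelBuffer()` are hypothetical helper extensions, not part of Core ML):

    // Sketch: direct prediction with the Xcode-generated model class.
    let model = Inceptionv3()

    // You must resize the image and produce the pixel buffer yourself.
    // NOTE: resize(to:) and toPixelBuffer() are hypothetical helpers.
    guard let resized = uiImage.resize(to: CGSize(width: 299, height: 299)),
          let pixelBuffer = resized.toPixelBuffer() else {
        return
    }

    guard let prediction = try? model.prediction(image: pixelBuffer) else {
        return
    }
    // The generated class exposes typed outputs such as classLabel.
    answerLabel.text = prediction.classLabel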
What is the difference between these two approaches, and which one is recommended, given that both can take an image as input and produce a prediction to display?