
I am using the Core ML SqueezeNet model to detect paper (rectangles) in an image.

I created the model and request as per Apple's documentation.

guard let model = try? VNCoreMLModel(for: squeezeNetModel.model) else { fatalError() }
let request = VNCoreMLRequest(model: model) { (request, error) in
    guard let rectangles = request.results as? [VNClassificationObservation] else { fatalError() }
}

The code above works fine. But I want to detect paper, so I used [VNRectangleObservation] instead of [VNClassificationObservation]. This causes my app to crash, and I can't find a solution to this problem anywhere. The main reason I want to use [VNRectangleObservation] is to capture the detected region and draw a red overlay on it.
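As an aside: for detecting a sheet of paper specifically, Vision ships a built-in rectangle detector that needs no Core ML model and really does return VNRectangleObservation results. A minimal sketch, assuming `cgImage` holds the image you want to scan (the threshold values are illustrative, not prescribed):

```swift
import Vision

// Sketch: detect rectangular regions (e.g. a sheet of paper) with Vision's
// built-in VNDetectRectanglesRequest — no Core ML model involved.
func detectPaper(in cgImage: CGImage) {
    let request = VNDetectRectanglesRequest { request, error in
        // This request genuinely produces VNRectangleObservation results,
        // so the cast succeeds (unlike with a classification model).
        guard let rectangles = request.results as? [VNRectangleObservation] else { return }
        for rect in rectangles {
            // boundingBox is normalized (0...1, bottom-left origin);
            // convert it to view coordinates before drawing a red overlay.
            print(rect.boundingBox, rect.confidence)
        }
    }
    request.maximumObservations = 1   // expect a single sheet of paper
    request.minimumConfidence = 0.8   // tune for your use case

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```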

Andy Jazz
cgeek

1 Answer


The reason your app crashes is that request.results is an array of VNClassificationObservation objects. You cannot cast this into an array of VNRectangleObservation objects, since that is something completely different. It's like buying a bottle of milk from the store and trying to turn it into a coke by putting a Coca-Cola label on the bottle. It doesn't work.
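To make that concrete: a conditional cast inside the completion handler shows which observation type the model actually produces, and avoids the fatalError crash. A sketch, reusing the `model` from the question:

```swift
// Sketch: inspect what the request actually produced instead of force-casting.
// A classifier yields VNClassificationObservation (label + confidence, no
// geometry); only geometry-producing requests yield VNRectangleObservation.
let request = VNCoreMLRequest(model: model) { request, error in
    if let classifications = request.results as? [VNClassificationObservation] {
        // SqueezeNet lands here: it answers "what is in the image", not "where".
        print(classifications.first?.identifier ?? "no result")
    } else if let rectangles = request.results as? [VNRectangleObservation] {
        // Never reached with a classification model.
        print(rectangles.count)
    }
}
```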

If you want to detect where in the image the objects occur, you'll need to use a different model, like squeezeDet (with a D) or YOLO.
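With such a model, the results are observations that carry a boundingBox, which is exactly what you need for the red overlay. A sketch, assuming an iOS 12+ object-detection model (e.g. a YOLO conversion) whose results are VNRecognizedObjectObservation, and a hypothetical `imageView` displaying the image:

```swift
import Vision
import UIKit

// Sketch: draw a red border over each detected object. Vision boxes are
// normalized with a bottom-left origin; flip the y-axis and scale to the
// view's coordinate space before drawing.
func drawOverlays(for results: [Any]?, on imageView: UIImageView) {
    guard let objects = results as? [VNRecognizedObjectObservation] else { return }
    for object in objects {
        let box = object.boundingBox
        let size = imageView.bounds.size
        let frame = CGRect(x: box.minX * size.width,
                           y: (1 - box.maxY) * size.height,
                           width: box.width * size.width,
                           height: box.height * size.height)
        let overlay = UIView(frame: frame)
        overlay.layer.borderColor = UIColor.red.cgColor
        overlay.layer.borderWidth = 2
        imageView.addSubview(overlay)
    }
}
```

Note this ignores aspect-fit letterboxing; if the image is scaled to fit the view, you also need to offset and scale the box by the displayed image rect.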

Matthijs Hollemans
  • As per the model, it returns [Any] as its results, hence I thought I could cast it. Thank you for answering. – cgeek Oct 20 '17 at 12:16