// Initialize the rectangle to track (normalized coordinates, 0–1)
CGRect rect = CGRectMake(0, 0, 0.3, 0.3);
NSError *error = nil;
VNSequenceRequestHandler *reqImages = [[VNSequenceRequestHandler alloc] init];
VNRectangleObservation *observeRect = [VNRectangleObservation observationWithBoundingBox:rect];
VNTrackRectangleRequest *reqRect = [[VNTrackRectangleRequest alloc] initWithRectangleObservation:observeRect];
NSArray<VNRequest *> *requests = @[reqRect];
BOOL bSucc = [reqImages performRequests:requests onCGImage:img.CGImage error:&error];

// Get the tracked bounding box
VNDetectRectanglesRequest *reqRectTrack = [VNDetectRectanglesRequest new];
NSArray<VNRequest *> *requestsTrack = @[reqRectTrack];
[reqImages performRequests:requestsTrack onCGImage:img.CGImage error:&error];

VNRectangleObservation *observe = [reqRectTrack.results firstObject];
CGRect boundingBox = observe.boundingBox;

Why is the `boundingBox` value incorrect?

Where can I find a demo of the Vision framework in iOS 11?

nathan
Alberl
  • I ran into the same problem as you. I found the example used in the Vision Keynote; they do resizing on the `boundingBox` values, but it's not working on my side. Here is the sample: https://developer.apple.com/sample-code/wwdc/2017/ImageClassificationwithVisionandCoreML.zip Let me know if you find the solution. – Akhu Jun 09 '17 at 20:03
  • I found the demo in the Keynote too:

        // Create request handler
        let requestHandler = VNSequenceRequestHandler()
        // Start the tracking with an observation
        let observations = detectionRequest.results as! [VNDetectedObjectObservation]
        let objectsToTrack = observations.map { VNTrackObjectRequest(detectedObjectObservation: $0) }
        // Run the requests
        requestHandler.perform(objectsToTrack, on: pixelBuffer)
        // Let's look at the results
        for request in objectsToTrack {
            for observation in request.results as! [VNDetectedObjectObservation] { ... }
        }

    but it doesn't work for me. – Alberl Jun 12 '17 at 04:36
  • @Alberl Did you find any solution for detecting an object in a static image? I am having trouble converting the points from one coordinate system to another. – The iCoder Sep 20 '17 at 06:50

2 Answers


A demo of tracking an object with the Vision framework can be found at this link:

https://github.com/jeffreybergier/Blog-Getting-Started-with-Vision

Image credit: Jeffrey Bergier

The blogger goes into great detail about getting the demo working, and the post includes a GIF of a working build.

Hope this is what you are after.


Here is my simple example of using the Vision framework: https://github.com/artemnovichkov/iOS-11-by-Examples. I guess you have a problem with different coordinate systems. Pay attention to the rect conversion:

cameraLayer.metadataOutputRectConverted(fromLayerRect: originalRect)

and

cameraLayer.layerRectConverted(fromMetadataOutputRect: transformedRect)
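For a still image (rather than a camera preview layer), the same mismatch shows up because Vision reports `boundingBox` in normalized coordinates (values 0–1, origin at the bottom-left), while UIKit uses a top-left origin in points. A minimal sketch of the conversion — the helper name and `imageSize` parameter are illustrative, not part of the Vision API:

    // Convert a Vision normalized boundingBox (origin bottom-left, 0–1 range)
    // into a UIKit-style rect (origin top-left, in the image's dimensions).
    // `imageSize` is assumed to be the size of the image that was processed.
    static CGRect RectForVisionBoundingBox(CGRect boundingBox, CGSize imageSize) {
        CGFloat w = boundingBox.size.width  * imageSize.width;
        CGFloat h = boundingBox.size.height * imageSize.height;
        CGFloat x = boundingBox.origin.x * imageSize.width;
        // Flip the Y axis: Vision measures from the bottom, UIKit from the top.
        CGFloat y = (1.0 - boundingBox.origin.y - boundingBox.size.height) * imageSize.height;
        return CGRectMake(x, y, w, h);
    }

Without the Y flip, the rect lands mirrored vertically, which is the most common reason the `boundingBox` looks "incorrect".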

Artem Novichkov