I am writing an algorithm to carve out rectangular shapes for processing on iOS using Apple's Vision framework. VNDetectRectanglesRequest mostly works and accurately detects the shapes in question, but no matter how clean the shape is, the four corner points always seem to leave a margin of padding between the detected contour and the actual shape.
It does this for real-world shapes, and exactly the same happens on an experimental test shape I drew, shown below: the detector leaves a margin between the detected shape (in red) and the actual shape.
I was wondering whether anyone with more experience with this framework can say whether these detected shapes can be "tightened up." My first thought is that the margins look about even on all sides, so I could hack in a transform that shrinks the detected rectangle by some constant pad (a rough sketch of what I mean is below), but that's obviously a dirty solution likely to break whenever Apple alters the framework.
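For illustration, a minimal sketch of that shrink hack, pulling each corner of the observation toward the quad's centroid by a constant fraction (the 0.03 pad value is a placeholder I'd have to tune by eye, not anything derived from the framework):

import Vision
import CoreGraphics

// Pulls each corner of the detected quad toward the centroid by a fixed
// fraction. The default pad is a guess that would need empirical tuning.
func inset(_ obs: VNRectangleObservation, by pad: CGFloat = 0.03) -> [CGPoint] {
    let corners = [obs.topLeft, obs.topRight, obs.bottomRight, obs.bottomLeft]
    let cx = corners.map(\.x).reduce(0, +) / 4
    let cy = corners.map(\.y).reduce(0, +) / 4
    return corners.map { p in
        CGPoint(x: p.x + (cx - p.x) * pad, y: p.y + (cy - p.y) * pad)
    }
}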
Hoping others have corrected for this before, as the carved-out shapes come through the perspective transform warped because of this padding, and I imagine this Vision algorithm is meant exactly for carving out subimages for processing. For reference, the carving step itself looks roughly like the sketch below.
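This is approximately how I do the perspective crop, assuming a Core Image pipeline (the carve helper name is mine); Vision's normalized corners use a bottom-left origin, which matches Core Image, so they only need to be scaled up to pixel coordinates before feeding CIPerspectiveCorrection:

import Vision
import CoreImage

// Crops and de-skews the region described by a VNRectangleObservation.
// Vision returns corners normalized to 0...1, so scale them to pixels first.
func carve(_ obs: VNRectangleObservation, from image: CIImage) -> CIImage {
    let w = image.extent.width
    let h = image.extent.height
    func scaled(_ p: CGPoint) -> CIVector { CIVector(x: p.x * w, y: p.y * h) }
    return image.applyingFilter("CIPerspectiveCorrection", parameters: [
        "inputTopLeft": scaled(obs.topLeft),
        "inputTopRight": scaled(obs.topRight),
        "inputBottomLeft": scaled(obs.bottomLeft),
        "inputBottomRight": scaled(obs.bottomRight)
    ])
}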
I used the following parameters for the rectangle detection request. I looked through all of the documentation and could not find any tolerance or padding parameter that would tune the algorithm to wrap the detected shape more tightly.
import Vision

let rectDetectRequest = VNDetectRectanglesRequest()
rectDetectRequest.maximumObservations = 10    // up to 10 rectangles per image
rectDetectRequest.quadratureTolerance = 15.0  // degrees a corner may deviate from 90°
rectDetectRequest.minimumConfidence = 0.6
rectDetectRequest.minimumAspectRatio = 0.8
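For completeness, the request is performed the usual way (sketch; cgImage stands in for whatever source image is actually being processed):

import Vision

// Run detection on a CGImage and read back the rectangle observations.
// `cgImage` is a stand-in for the actual capture source.
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try? handler.perform([rectDetectRequest])
let rects = rectDetectRequest.results ?? []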
I had success doing the same thing in the past with contour detection in OpenCV, but this Vision framework is very clean and performant on-device, and I would much rather stick with it. Any insight would be much appreciated.