I am using OpenCV on iOS to detect a rectangular label so I can help users snap a photo of that label. I have an overlay that appears once the match threshold is met.
My question is: does the patch image used have to be an exact match? The labels I am detecting have text on them that varies from label to label — all the same font, but different characters. Is it possible to train OpenCV on a patch image's color and/or size/dimensions? Or is there perhaps another way around this issue?
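For context, my matching step looks roughly like the sketch below. This is not my exact code — it assumes the OpenCV 3 C++ API, and the ORB settings, the function name, and the kMatchThreshold value are placeholders I picked for illustration:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Sketch of the patch-vs-frame matching step (placeholder names and values).
bool patchIsVisible(const cv::Mat& patchGray, const cv::Mat& frameGray)
{
    // Detect ORB features on the stored patch image and the current camera frame.
    cv::Ptr<cv::ORB> detector = cv::ORB::create(500);

    std::vector<cv::KeyPoint> patchKeypoints, frameKeypoints;
    cv::Mat patchDescriptors, frameDescriptors;
    detector->detectAndCompute(patchGray, cv::noArray(), patchKeypoints, patchDescriptors);
    detector->detectAndCompute(frameGray, cv::noArray(), frameKeypoints, frameDescriptors);

    if (patchDescriptors.empty() || frameDescriptors.empty())
        return false;

    // Match descriptors and keep only distinctive matches (Lowe's ratio test).
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knnMatches;
    matcher.knnMatch(patchDescriptors, frameDescriptors, knnMatches, 2);

    int goodMatches = 0;
    for (const auto& m : knnMatches) {
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            ++goodMatches;
    }

    // Present the overlay once enough matches are found; 15 is just an example value.
    const int kMatchThreshold = 15;
    return goodMatches >= kMatchThreshold;
}
```

The problem is that the text on each label differs, so the features found on my patch image don't reliably line up with the label in front of the camera.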
Here is a close example of the labels I'm scanning, except there are no icons and everything is in a single font type.
Here is the tutorial I am following, which does the detection with an image of a target: http://www.raywenderlich.com/59999/make-augmented-reality-target-shooter-game-opencv-part-3