I am trying to find the corners of a square (potentially rotated) shape, in order to determine the direction of its primary axes (horizontal and vertical) and to be able to do a perspective transform (straighten it out).

From a prior processing stage I obtain the coordinates of a point (red dot in the image) belonging to the shape. Next I flood-fill the shape on a thresholded version of the image to determine its area and its center (not shown): I sum up the X and Y coordinates of all filled pixels and divide each sum by the area (the number of pixels filled).
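For reference, a minimal sketch of that stage (NumPy only for brevity, my real code doesn't depend on it; `thresh` is the thresholded binary image and `seed` the point from the prior stage):

```python
from collections import deque

import numpy as np

def flood_fill_stats(thresh, seed):
    """Flood-fill the shape containing `seed` in a binary mask and
    return the filled mask, its area and its centroid."""
    h, w = thresh.shape
    filled = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    sum_x = sum_y = area = 0
    while queue:
        x, y = queue.popleft()
        if x < 0 or y < 0 or x >= w or y >= h:
            continue                      # outside the image
        if filled[y, x] or not thresh[y, x]:
            continue                      # already visited or background
        filled[y, x] = True
        area += 1
        sum_x += x
        sum_y += y
        # 4-connected neighbours
        queue.extend(((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)))
    return filled, area, (sum_x / area, sum_y / area)
```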

Given this information, what is an easy and reliable way to determine the corners of the shape (blue arrows)?

I was thinking about keeping track of P1, P2, P3, P4, where P1 is (minX, minY), P2 is (minX, maxY), P3 is (maxX, minY) and P4 is (maxX, maxY); that is, P1 is the point with the smallest value of X encountered and, among those, the one with the smallest Y. I would then sort them to get a clockwise ordering. But I'm not sure whether this is correct in all cases, or efficient.
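Roughly what I have in mind, as an untested sketch run over the filled mask rather than during the fill (the clockwise ordering just sorts the four candidates by angle around the centroid):

```python
import math

import numpy as np

def extreme_corners(filled, centroid):
    """Corner candidates as the extreme-X points of the filled mask,
    ordered clockwise (in image coordinates) around the centroid."""
    ys, xs = np.nonzero(filled)
    pts = np.column_stack([xs, ys])

    def extreme(key):
        # lexicographic extreme: primary criterion on X, tie-break on Y
        return tuple(min(pts, key=key))

    p1 = extreme(lambda p: ( p[0],  p[1]))   # min X, then min Y
    p2 = extreme(lambda p: ( p[0], -p[1]))   # min X, then max Y
    p3 = extreme(lambda p: (-p[0],  p[1]))   # max X, then min Y
    p4 = extreme(lambda p: (-p[0], -p[1]))   # max X, then max Y

    cx, cy = centroid
    # increasing atan2 angle runs clockwise on screen because Y grows downwards
    return sorted([p1, p2, p3, p4],
                  key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
```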

PS: I can't use OpenCV.

[Image "Situation": the shape, with the red dot from the prior stage and blue arrows marking the corners to find]

Christoph Rackwitz
Alex Suzuki
  • connected components labeling (with stats). Then you can look up the label for your point, and now you have a bounding box as well as a mask for that component. -- Since that is a **QR code**, do a literature review; no need to reinvent the wheel. – Christoph Rackwitz Apr 20 '22 at 10:07
  • How about [Contour Features](https://docs.opencv.org/3.4/dd/d49/tutorial_py_contour_features.html)? – Rotem Apr 20 '22 at 10:07
  • 1
    @ChristophRackwitz actually, it's the finder pattern of an Aztec code. But I'll look it up connected component labelling with stats, thank you. – Alex Suzuki Apr 20 '22 at 10:14
  • 1
    aztec! right. I jump to conclusions a bit too quickly sometimes. I'm sure there's literature on those too, and the algorithms to find those features likely overlap. – Christoph Rackwitz Apr 20 '22 at 10:16
  • @ChristophRackwitz I have the spec in front of me, and it describes lots of things really well (the finding of the bullseye, for instance), but unfortunately it leaves out the detection of corners and main axes. I imagine this is easier with QR, as you have three distinct finder patterns and can get the axes from those. – Alex Suzuki Apr 20 '22 at 10:25
  • @ChristophRackwitz just to follow up on "connected components with stats", I assume you mean the OpenCV function `connectedComponentsWithStats` (https://docs.opencv.org/3.4/d3/dc0/group__imgproc__shape.html#gac7099124c0390051c6970a987e7dc5c5)? That's basically what I'm doing right now (just minus OpenCV), and the bounding box and area returned in the stats of that function are exactly what I am already determining by flood-filling from the point. I'm going to have a look at OpenCV's minAreaRect, since it supports rotation. – Alex Suzuki Apr 20 '22 at 11:51

2 Answers


Looking at your image, the directions of the two axes of the 2D pattern coordinate system can be estimated from a histogram of gradient directions.

When you create such a histogram, four peaks will show up clearly (see the sketch after the list below).

  • If the image is captured head-on (no perspective; your image looks like this case), the angles between adjacent peaks are ideally all 90 degrees. The directions of the two axes of the pattern coordinate system can be estimated directly from those peaks. After that, the four corners can simply be estimated from an "axis-aligned bounding box" (aligned to the estimated axes, of course).

  • If not (when the image is a picture with perspective), the four peaks indicate which edge lines run along the axes of the pattern coordinates. So, for example, you can estimate each corner location as the intersection of the two edge lines that meet there.
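For illustration, a rough sketch of building such a gradient-direction histogram without OpenCV (NumPy only; the Sobel-style gradients, the magnitude threshold and the number of bins are arbitrary choices, and peak picking is left out):

```python
import numpy as np

def gradient_direction_histogram(gray, mag_thresh=30.0, bins=360):
    """Histogram of gradient directions over the image, weighted by
    gradient magnitude; its peaks indicate the edge orientations."""
    img = gray.astype(np.float64)

    # Sobel-style gradients on the interior pixels, written as array slices
    gx = (img[1:-1, 2:] - img[1:-1, :-2]) * 2 \
       + (img[:-2, 2:] - img[:-2, :-2]) \
       + (img[2:, 2:] - img[2:, :-2])
    gy = (img[2:, 1:-1] - img[:-2, 1:-1]) * 2 \
       + (img[2:, :-2] - img[:-2, :-2]) \
       + (img[2:, 2:] - img[:-2, 2:])

    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0

    strong = mag > mag_thresh   # ignore weak gradients (mostly noise)
    hist, edges = np.histogram(ang[strong], bins=bins, range=(0.0, 360.0),
                               weights=mag[strong])
    return hist, edges
```

In the head-on case the four strongest peaks of `hist` should sit roughly 90 degrees apart; folding the angles modulo 180 degrees merges opposite gradient directions and leaves the two axis orientations.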

fana
  • In the latter case, an image with very strong perspective cannot be handled; this only works when the "blurred" peaks can still be found as four peaks. For strong perspective, another method must be employed, e.g. starting with finding vanishing points. – fana Apr 21 '22 at 05:56
  • I like your approach – Jeru Luke Apr 26 '22 at 18:43

What I eventually ended up doing is the following:

  1. Trace the edges of the contour using Moore-Neighbour Tracing. This gives me a sequence of points lying on the border of the rectangle.

  2. During the trace, I observe changes in the rectangular distance between the first and last points of a sliding window; corners show up as pronounced changes in that distance. The idea is inspired by the paper "The outline corner filter" by C. A. Malcolm (https://spie.org/Publications/Proceedings/Paper/10.1117/12.939248?SSO=1).

This gives me accurate results with low computational overhead and a small memory footprint.
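For reference, a condensed sketch of step 2 (the Moore-Neighbour trace from step 1 is omitted; `contour` stands for the ordered boundary points it produces, the window size is arbitrary, and I use the plain Euclidean chord length here as a stand-in for the rectangular distance of the actual filter):

```python
import numpy as np

def corner_candidates(contour, window=15, n_corners=4):
    """Corner candidates on a closed, ordered contour: slide a window of
    `window` boundary points along it and measure the chord between the
    window's first and last point. On straight edges the chord stays long;
    where the boundary turns through a corner it dips to a local minimum."""
    pts = np.asarray(contour, dtype=float)   # shape (N, 2)
    n = len(pts)
    idx = (np.arange(n) + window) % n        # the contour wraps around
    chord = np.linalg.norm(pts[idx] - pts, axis=1)

    # crude selection: take the n_corners smallest chords while suppressing
    # indices within one window length of an already accepted corner
    chosen = []
    for i in np.argsort(chord):
        if all(min(abs(i - j), n - abs(i - j)) > window for j in chosen):
            chosen.append(int(i))
        if len(chosen) == n_corners:
            break
    # report the middle of each window as the corner location
    return [tuple(pts[(i + window // 2) % n].astype(int)) for i in sorted(chosen)]
```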

Alex Suzuki