
I have a binary image with dots, obtained using OpenCV's goodFeaturesToTrack, as shown in Image1.

Image1 : Cloud of points

I would like to fit a grid of 4*25 dots to it, such as the one shown in Image2 (not all points are visible in the image, but it is a regular 4*25-point rectangle).

Image2 : Model grid of points

My model grid of 4*25 dots is parametrized by:

1. the position of the top-left corner
2. the inclination of the rectangle with respect to the horizon

The code below shows a function that builds such a model.

This problem seems to be close to a chessboard corner problem.

I would like to know how to fit my model cloud of points to the input image and recover the position and angle of the cloud. I can easily measure a distance between the two images (the input one and the one with the model grid), but I would like to avoid checking every pixel and angle in the image to find the minimum of this distance.
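For reference, one simple distance of the kind mentioned above is the (negated) count of coincident positive pixels between the detected-dot image and the model-grid image. This is a hypothetical helper, not the asker's code; the function name `grid_distance` is an assumption:

```python
import numpy as np

def grid_distance(img_a, img_b):
    """One possible distance between two binary dot images of the same
    shape: the number of coincident positive pixels, negated so that a
    lower value means a better fit. (Hypothetical helper.)"""
    return -int(np.logical_and(img_a > 0, img_b > 0).sum())
```

Minimizing this over position and angle by brute force is exactly the exhaustive search the question wants to avoid.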

import numpy as np

def ModelGrid(pos, angle, shape):

    # Initialization of output image of size shape
    table = np.zeros(shape)

    # Parameters
    size_pan = [32, 20]    # Spacing between corners, in pixels
    nb_corners = [4, 25]   # Grid dimensions
    index = np.ndarray([nb_corners[0], nb_corners[1], 2], dtype=np.dtype('int16'))
    angle = angle * np.pi / 180  # Degrees to radians

    # Compute the position of each corner and mark it in the table
    for i in range(nb_corners[0]):
        for j in range(nb_corners[1]):
            index[i, j, 0] = pos[0] + j*int(size_pan[1]*np.sin(angle)) + i*int(size_pan[0]*np.cos(angle))
            index[i, j, 1] = pos[1] + j*int(size_pan[1]*np.cos(angle)) - i*int(size_pan[0]*np.sin(angle))

            if 0 < index[i, j, 0] < table.shape[0]:
                if 0 < index[i, j, 1] < table.shape[1]:
                    table[index[i, j, 0], index[i, j, 1]] = 1

    return table

1 Answer


A solution I found, which works relatively well, is the following:

First, I build an index of the positions of all positive pixels by going through the image once. I will call these pixels corners.
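Building that index is a one-liner with NumPy; this sketch assumes the dots are stored as nonzero pixels in a 2-D array (the function name `corner_index` is an assumption):

```python
import numpy as np

def corner_index(binary_img):
    """Return an (N, 2) array of the (row, col) coordinates of all
    positive pixels in a binary image, in row-major order."""
    return np.argwhere(binary_img > 0)
```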

I then use this index to compute an average angle of inclination: for each corner, I look for other corners close enough, in certain areas, to define a cross, finding for each pixel the corners directly to its left, right, top, and bottom. I use this cross to compute an inclination angle, and then take the median of all the obtained angles as the angle for my model grid of points.
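A simplified sketch of the angle-estimation step, using only the "right" arm of the cross: for each corner, find neighbours roughly to its right, measure the angle of the joining segment, and take the median over all pairs. The function name and the `max_dist`/`band` tolerances are assumptions, not the answerer's exact values:

```python
import numpy as np

def median_inclination(corners, max_dist=40, band=5):
    """Estimate the grid inclination in degrees from an (N, 2) array of
    (row, col) corner positions. For each corner, look for neighbours
    roughly to its right (within `band` rows and `max_dist` columns),
    measure the angle of the joining segment, and return the median of
    all such angles, which is robust to missing or spurious corners."""
    angles = []
    for p in corners:
        dr = corners[:, 0] - p[0]
        dc = corners[:, 1] - p[1]
        mask = (np.abs(dr) <= band) & (dc > 0) & (dc <= max_dist)
        for q in corners[mask]:
            angles.append(np.degrees(np.arctan2(q[0] - p[0], q[1] - p[1])))
    return float(np.median(angles)) if angles else 0.0
```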

Once I have this angle, I simply build a table using it and the position of each corner in turn. The optimization function measures the number of coincident pixels in both images and returns the best position.
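The search over candidate positions can be sketched as follows; `model_fn` is assumed to behave like the question's ModelGrid(pos, angle, shape), and the function name `best_position` is an assumption:

```python
import numpy as np

def best_position(corners, model_fn, angle, shape):
    """Anchor the model grid at every detected corner and keep the anchor
    with the most coincident pixels. `corners` is an (N, 2) array of
    (row, col) dot positions; `model_fn(pos, angle, shape)` must return a
    binary image of the model grid anchored at `pos`."""
    img = np.zeros(shape)
    img[corners[:, 0], corners[:, 1]] = 1
    best, best_score = None, -1
    for pos in corners:
        model = model_fn(pos, angle, shape)
        score = int(np.logical_and(img > 0, model > 0).sum())
        if score > best_score:
            best, best_score = tuple(pos), score
    return best, best_score
```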

This approach works fine for most examples, but the returned 'best position' has to be one of the corners, which does not guarantee that it corresponds to the true best position, especially if the top-left corner of the grid is missing from the cloud of corners.