
I am currently implementing an algorithm for identifying the axis of minimum inertia of a colored mass (provided by the second moments). In order to do so, I need to acquire the centre of mass, as given by the first moments.

The weighted averaging function works well, but due to outlier pixels, I am receiving undesired results.

Here is the averaging function:

(e.g. x's weighted average)

double x_ = 0;            // x coordinate of the weighted center of mass
int tempSumX = 0;         // foreground pixels counted in the current row
int totalForeground = 0;  // foreground pixels in the whole image

for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
        if (colorAt(i, j).isForeground()) {
            tempSumX++;
            totalForeground++;
        }
    }
    x_ += i * tempSumX;   // first moment: row index weighted by its pixel count
    tempSumX = 0;
}
x_ /= totalForeground;    // x_ is now the x coordinate of the weighted center of mass
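
For completeness, the same first-moment centroid can be written as a single pass over both coordinates. This is only a minimal sketch, reusing the colorAt(i, j).isForeground() interface from above:

double sumRow = 0, sumCol = 0;  // first moments about the two axes
int count = 0;                  // zeroth moment: number of foreground pixels

for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
        if (colorAt(i, j).isForeground()) {
            sumRow += i;
            sumCol += j;
            count++;
        }
    }
}
// centroid = (sumRow / count, sumCol / count), provided count > 0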

[Image: "Incorrect Center of Mass" - the two-color mass with the calculated center of mass marked by a white dot]

Given an image such as this, which contains exclusively two colors (background and foreground), how can I remove the outlying pixels? Note: "outlying pixels" here means anything that is not part of the big color mass. The white dot is the calculated center of mass, which is incorrect.

Much appreciated.

JT Cho
  • Have you looked at morphological filters? – mathematician1975 Jul 13 '12 at 13:53
  • I considered them, but I'm not sure how well they will work in my case. Just not too well-informed. I was also looking at graph theory to identify connections. – JT Cho Jul 13 '12 at 13:55
  • It does not look like an average, or do you have any outliers not visible in the image? What exactly do you weigh when calculating the weighted average? – TaZ Jul 13 '12 at 17:45
  • "due to outlier pixels, I am receiving undesired results." On images where there are no smaller, separate pixels as in the one I show, the weighted center of mass is correct. Or perhaps not. Allow me to look at my program again.. – JT Cho Jul 13 '12 at 19:43
  • The current algorithm I'm using for weighted average is in the post above now. – JT Cho Jul 13 '12 at 19:52
  • That's the actual size of the binary image. What I'm doing is taking a color cluster from my k-means algorithm, and iterating through a portion of a given image to extract all colors that are deemed in that cluster, producing what you see here. I can provide some updated pictures for better results. – JT Cho Jul 14 '12 at 02:35
  • The point was displayed incorrectly; I accidentally swapped the x,y values in the point. As you'll notice, 16 and 11 are simply interchanged from what you figured. – JT Cho Jul 16 '12 at 01:55

2 Answers


There are a lot of flood fill algorithms that would identify all the connected pixels given a starting point.
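
A minimal sketch of that approach, under the assumption that the binary image is available as a 2-D array of 0/1 ints (a stand-in for whatever image type is actually in use): flood fill from each unvisited foreground pixel to label the connected components, then keep only the largest one.

#include <stack>
#include <utility>
#include <vector>

// Connected-component labelling via iterative flood fill (4-connectivity).
// On return, labels[i][j] holds the component id of each foreground pixel
// (-1 for background) and the returned vector holds each component's size.
std::vector<int> labelComponents(const std::vector<std::vector<int>>& img,
                                 std::vector<std::vector<int>>& labels)
{
    const int rows = img.size(), cols = img[0].size();
    labels.assign(rows, std::vector<int>(cols, -1));
    std::vector<int> sizes;

    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            if (img[i][j] != 1 || labels[i][j] != -1)
                continue;                      // background or already labelled
            const int id = sizes.size();       // start a new component
            sizes.push_back(0);
            std::stack<std::pair<int, int>> work;
            work.push(std::make_pair(i, j));
            labels[i][j] = id;
            while (!work.empty()) {
                std::pair<int, int> p = work.top();
                work.pop();
                sizes[id]++;
                const int dr[4] = {-1, 1, 0, 0};
                const int dc[4] = {0, 0, -1, 1};
                for (int k = 0; k < 4; k++) {
                    int nr = p.first + dr[k], nc = p.second + dc[k];
                    if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
                        img[nr][nc] == 1 && labels[nr][nc] == -1) {
                        labels[nr][nc] = id;                // label when pushed, so
                        work.push(std::make_pair(nr, nc));  // each pixel enters once
                    }
                }
            }
        }
    }
    return sizes;
}

The component with the largest size is the main mass; the first and second moments can then be accumulated only over pixels whose label matches it, and every other component ignored as an outlier.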

Alternatively, a common way to remove small outliers like these, which come from noise, is to erode the image and then dilate it back to its original size - although if you are purely doing CoG you don't necessarily need the dilate step.
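
A minimal sketch of the erode step, under the same 2-D 0/1 array assumption, using a 3x3 square structuring element:

#include <vector>

// Erode a binary image with a 3x3 square structuring element: a pixel stays
// foreground only if it and all 8 of its neighbors are foreground.
// Border pixels are simply set to background here for brevity.
std::vector<std::vector<int>> erode3x3(const std::vector<std::vector<int>>& img)
{
    const int rows = img.size(), cols = img[0].size();
    std::vector<std::vector<int>> out(rows, std::vector<int>(cols, 0));

    for (int i = 1; i < rows - 1; i++) {
        for (int j = 1; j < cols - 1; j++) {
            bool keep = (img[i][j] == 1);
            for (int di = -1; di <= 1 && keep; di++)
                for (int dj = -1; dj <= 1 && keep; dj++)
                    if (img[i + di][j + dj] == 0)
                        keep = false;
            out[i][j] = keep ? 1 : 0;
        }
    }
    return out;
}

A single erosion removes isolated pixels and one-pixel-wide specks; dilating the result with the same element afterwards (a morphological opening) roughly restores the main mass to its original outline, which matters if you also want the second moments rather than just the centroid.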

Martin Beckett
  • I was looking at morphological filters on mathematician1975's recommendation, but I became confused on what Structuring Element I would use. The general principle makes sense, but I wasn't quite sure about the SE stuff. If I were to use flood fill, would I simply use it multiple times to identify which shape is the largest? That is, how would I know where to begin searching? – JT Cho Jul 13 '12 at 14:18
  • FloodFill seems to be a good approach to me. Even if you use connected component labeling you'll have to decide which blob to use. There's an OpenCV example here: http://areshopencv.blogspot.com/2011/12/blob-detection-connected-component-pure.html. You can then get OpenCV to calculate the moments for you: http://opencv.willowgarage.com/documentation/cpp/structural_analysis_and_shape_descriptors.html – beaker Jul 16 '12 at 18:37
  • Thanks for your help! I ended up using a connected component labeling algorithm since it's better suited for what I need, however. Thanks for the OpenCV API link, beaker... although I already wrote all the code for the moments myself beforehand. Whoops. – JT Cho Jul 17 '12 at 16:15

How about, in pseudo code:

for ( y = 0; y < rows; y++ )
{
   for ( x = 0; x < cols; x++ )
   {
       if ( pixel( x, y ).isColor() )
       {
          int sum = 0;
          // count this pixel's background neighbors
          // (forgetting about edge cases for clarity...)
          if ( !pixel( x-1, y-1 ).isColor() ) sum++;
          if ( !pixel( x,   y-1 ).isColor() ) sum++;
          if ( !pixel( x+1, y-1 ).isColor() ) sum++;
          if ( !pixel( x-1, y   ).isColor() ) sum++;
          if ( !pixel( x+1, y   ).isColor() ) sum++;
          if ( !pixel( x-1, y+1 ).isColor() ) sum++;
          if ( !pixel( x,   y+1 ).isColor() ) sum++;
          if ( !pixel( x+1, y+1 ).isColor() ) sum++;
          if ( sum >= 7 )
          {
             pixel( x, y ).setBackground();
             // back up so the scan revisits the earliest already-scanned
             // neighbor that may now be affected, i.e. (x-1, y-1)
             x = ( x >= 2 ) ? x - 2 : -1;  // the loop's x++ lands on x-1 (or 0)
             if ( y >= 1 ) y -= 1;         // resume the scan from the row above
          }
       }
   }
}

That is, remove any pixel surrounded by at least 7 background pixels. If you change the color of a pixel, back up to the earliest pixel that may now be affected, so that newly exposed pixels are re-examined as well.

Your measure of "outlier" could change - e.g. you could count diagonal neighbors as being worth half as much as the pixels directly above, below, left and right (say 1 versus 2), and then compare the weighted sum against a different threshold (sketched below).

You can increase accuracy by increasing the size of the filter - say a 5x5 window instead of 3x3. In that case pixels two away should count for even less.
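
A minimal sketch of that weighted variant, reusing the pixel( x, y ).isColor() interface from the pseudocode above; the weights (2 for orthogonal, 1 for diagonal neighbors) and the threshold of 10 are illustrative choices only:

// Weighted count of background neighbors in the 3x3 neighborhood:
// orthogonal neighbors score 2, diagonal neighbors score 1 (maximum 12).
// Edge cases are again ignored for clarity.
int backgroundScore( int x, int y )
{
    int score = 0;
    if ( !pixel( x,   y-1 ).isColor() ) score += 2;   // above
    if ( !pixel( x,   y+1 ).isColor() ) score += 2;   // below
    if ( !pixel( x-1, y   ).isColor() ) score += 2;   // left
    if ( !pixel( x+1, y   ).isColor() ) score += 2;   // right
    if ( !pixel( x-1, y-1 ).isColor() ) score += 1;   // diagonals
    if ( !pixel( x+1, y-1 ).isColor() ) score += 1;
    if ( !pixel( x-1, y+1 ).isColor() ) score += 1;
    if ( !pixel( x+1, y+1 ).isColor() ) score += 1;
    return score;
}

// Treat a foreground pixel as an outlier when, say, backgroundScore( x, y ) >= 10,
// which tolerates at most one orthogonal or two diagonal foreground neighbors.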

Rafael Baptista