
In the following greyscale image, I am trying to identify the objects that I have manually labelled in red. Does anyone have suggestions for how to do this?

[image 1] [image 2]

I have attempted to use a Gaussian blur and thresholding, but cannot exclusively identify these particles. Would edge detection be a good approach? Any suggestions welcome.

William Grimes
  • There are some good filters in OpenCV, with various thresholding techniques that could help, and a lot of documentation with examples too. I suppose there isn't any way to improve the image itself by changing the exposure/lighting? – CoMartel Jun 10 '15 at 13:06
  • Also, how many images are there? And what are you trying to achieve: counting the objects, getting the coordinates? – CoMartel Jun 10 '15 at 13:09
  • I think a Hough transform might help you locate the lines and/or the ellipses (they seem rather elliptical to me, at least). – Jblasco Jun 10 '15 at 13:13
  • Thank you for the comments. Certainly I will take a look at what OpenCV has to offer. There is not any way to improve the image itself at this stage. This is one frame from a video of 1000+ frames; I am trying to count the objects and get their coordinates, then I will link objects between frames and get information about trajectories. A Hough transform does not give a very good result for this purpose. – William Grimes Jun 10 '15 at 13:41
  • @WillyWonka1964: If I were you I'd use `findContours` and then fit `minAreaRect` on the results. If your images have only the long lines vs. the blobs, you should easily be able to use the resulting rectangles' side lengths to check whether their ratio is within a certain range. For lines, that longer/shorter ratio will be a large number, while for blob-like objects the ratio will be closer to 1. I'd also recommend that you `dilate` and/or `erode` your image to get rid of the noise and get the blobs to combine into a single object, and try to avoid fitting several smaller rectangles to each blob. – ljetibo Jun 10 '15 at 14:14

4 Answers


Your images look like a suitable target for machine learning.

Probability output of Trainable Weka Segmentation

You should be able to get your objects by simple thresholding and possibly some subsequent size filtering.
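For example, that post-processing step could be sketched in Python/OpenCV as below; the file name "probability.png" and the minimum area are placeholders to adapt to your export and expected particle size:

import cv2
import numpy as np

# Probability map exported from the classifier (hypothetical file name)
prob = cv2.imread("probability.png", cv2.IMREAD_GRAYSCALE)

# Simple thresholding: keep pixels scored above 50% object probability
ret, mask = cv2.threshold(prob, 127, 255, cv2.THRESH_BINARY)

# Size filtering: drop connected components smaller than min_area
min_area = 300  # tune to the expected particle size
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, 8)
keep = np.zeros_like(mask)
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= min_area:
        keep[labels == i] = 255

cv2.imwrite("objects.png", keep)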

  • You might also want to try ilastik, free software that combines pixel classification and object tracking with all the power of machine learning.

Edit:

This is how I proceeded using Trainable Weka segmentation:

  1. In the settings window, I activated some more features, set the sigma range from 4 to 32, and named the classes "Objects" and "Background":

Settings for the Trainable Segmentation plugin

  2. I then created some freehand line traces, added them to the respective classes, and clicked on "Train classifier". (The creation of the feature stack takes a while the first time you run the training, but refining the classification takes less time because only the classification needs to be run.)

Segmentation preview

  3. To get the probability map, click on "Get probability".
Jan Eglinger
  • Thanks, that is an interesting approach. I'm not very experienced with using Weka; am I right in thinking you defined two classes, for 'streaks' and 'coffee beans', then built a probability map? I did that with 4 classifiers for each and did not achieve such a good result. How exactly did you get the above result, and how many particles did you classify in order to get this? – William Grimes Jun 11 '15 at 09:34
  • Yes, I used two classes and the default settings of the trainable segmentation plugin. Then I drew a few zig-zag lines over the 'coffee beans' and added them to class 1, drew another few lines over both streaks and background and added them to class 2. Then I trained the classifier and refined the prediction in places where it wasn't performing as desired (mainly on the streaks). After a few cycles of refinement and training, I clicked on 'Get probability'. (You might want to try different features as well in the settings.) I will edit my answer with more details when I find the time. – Jan Eglinger Jun 11 '15 at 13:49
  • I refined the segmentation by adding more features in the settings, and I added some explanations to my answer. – Jan Eglinger Jun 12 '15 at 19:57

I had a quick go at this just from the command line using ImageMagick. I am sure it could be improved upon by looking at the squareness of the detected blobs, but I don't have infinite time available, and you said any ideas are welcome...

First, I thresholded the image, and then I replaced each pixel with the maximum pixel in its horizontal row, looking 6 pixels left and right; this was to join the two halves of each of your coffee-bean shapes together. The command is this:

convert https://i.stack.imgur.com/mr0OM.jpg -threshold 80% -statistic maximum 13x1 w.jpg

and it looks like this:

[Thresholded image after the horizontal maximum filter]
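For anyone working in Python/OpenCV instead, those two steps amount to a threshold followed by a greyscale dilation with a 13x1 horizontal kernel; a rough equivalent sketch (the input file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread("mr0OM.jpg", cv2.IMREAD_GRAYSCALE)

# -threshold 80%: pixels above 80% of the maximum become white
ret, thresh = cv2.threshold(img, int(0.8 * 255), 255, cv2.THRESH_BINARY)

# -statistic maximum 13x1: each pixel takes the maximum of its 13-pixel
# horizontal neighbourhood, i.e. a dilation with a 1-row, 13-column kernel
kernel = np.ones((1, 13), np.uint8)
joined = cv2.dilate(thresh, kernel)

cv2.imwrite("w.jpg", joined)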

I then ran a connected components analysis on it to find the blobs, like this:

convert https://i.stack.imgur.com/mr0OM.jpg            \
      -threshold 80% -statistic maximum 13x1          \
      -define connected-components:verbose=true       \
      -define connected-components:area-threshold=500 \
      -connected-components 8 -auto-level output.png

Objects (id: bounding-box centroid area mean-color):
  0: 1280x1024+0+0 642.2,509.7 1270483 srgb(4,4,4)
  151: 30x303+137+712 152.0,863.7 5669 srgb(255,255,255)
  185: 29x124+410+852 421.2,913.2 2281 srgb(255,255,255)
  43: 48x48+445+247 467.9,271.5 1742 srgb(255,255,255)
  35: 21x94+234+214 243.7,259.2 1605 srgb(255,255,255)
  10: 52x49+183+31 209.9,56.2 1601 srgb(255,255,255)
  30: 31x86+504+176 523.1,217.2 1454 srgb(255,255,255)
  171: 61x39+820+805 856.0,825.7 1294 srgb(255,255,255)
  119: 20x78+1212+625 1221.6,664.3 1277 srgb(255,255,255)
  17: 44x40+587+106 608.3,124.9 1267 srgb(255,255,255)
  94: 19x70+1077+545 1086.1,580.6 1100 srgb(255,255,255)
  59: 43x33+947+329 967.4,344.3 1092 srgb(255,255,255)
  40: 39x32+735+235 754.4,251.0 1074 srgb(255,255,255)
  91: 22x62+1258+540 1268.3,571.0 1045 srgb(255,255,255)
  18: 23x50+197+124 207.1,148.1 996 srgb(255,255,255)
  28: 40x28+956+165 976.8,177.7 970 srgb(255,255,255)
  76: 22x55+865+467 875.6,493.8 955 srgb(255,255,255)
  187: 18x59+236+858 244.4,886.4 928 srgb(255,255,255)
  211: 46x27+720+997 743.8,1009.0 891 srgb(255,255,255)
  206: 19x47+418+977 427.5,1000.5 804 srgb(255,255,255)
  57: 21x44+231+313 241.4,335.5 769 srgb(255,255,255)
  97: 20x45+1215+553 1224.3,574.3 766 srgb(255,255,255)
  52: 19x47+516+293 525.4,316.2 752 srgb(255,255,255)
  129: 20x41+18+645 28.2,665.1 746 srgb(255,255,255)
  83: 21x45+1079+497 1088.1,518.9 746 srgb(255,255,255)
  84: 17x44+636+514 644.0,535.7 704 srgb(255,255,255)
  62: 19x43+514+348 523.3,369.3 704 srgb(255,255,255)
  201: 19x42+233+951 242.3,971.8 675 srgb(255,255,255)
  134: 21x39+875+659 884.3,676.9 667 srgb(255,255,255)
  194: 25x32+498+910 509.5,924.6 625 srgb(255,255,255)
  78: 19x38+459+483 467.8,501.8 622 srgb(255,255,255)
  100: 20x37+21+572 30.6,589.4 615 srgb(255,255,255)
  53: 18x37+702+296 710.5,314.5 588 srgb(255,255,255)
  154: 18x37+1182+723 1191.2,741.3 566 srgb(255,255,255)
  181: 47x18+808+842 827.6,850.4 565 srgb(255,255,255)
  80: 19x33+525+486 534.2,501.9 544 srgb(255,255,255)
  85: 17x34+611+517 618.9,533.4 527 srgb(255,255,255)
  203: 21x31+51+960 60.5,974.6 508 srgb(255,255,255)
  177: 19x30+692+827 700.7,841.5 503 srgb(255,255,255)

which shows me all the blobs it found, with their bounding boxes and centroids. I then had ImageMagick draw the detected boxes onto your image as follows:

[Original image with the detected bounding boxes drawn on in green]

Just to explain the output, each line represents a blob. Let's look at the second line, which is:

  151: 30x303+137+712 152.0,863.7 5669 srgb(255,255,255)

This means the blob is 30 pixels wide by 303 pixels tall, and it is located 137 pixels from the left side of the image and 712 pixels down from the top. So it is basically the tallest green box at the bottom left of the image. 152.0,863.7 are the x,y coordinates of its centroid, its area is 5,669 pixels, and its colour is white.
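If you want to consume that listing programmatically, the fields can be pulled apart with a small Python sketch along these lines (the regular expression just mirrors the field layout shown above):

import re

# One line of the connected-components output shown above
line = "  151: 30x303+137+712 152.0,863.7 5669 srgb(255,255,255)"

m = re.match(r"\s*(\d+): (\d+)x(\d+)\+(\d+)\+(\d+) ([\d.]+),([\d.]+) (\d+)", line)
blob_id, w, h, x, y = (int(m.group(i)) for i in range(1, 6))
cx, cy = float(m.group(6)), float(m.group(7))
area = int(m.group(8))

print(blob_id, w, h, x, y, cx, cy, area)
# 151 30 303 137 712 152.0 863.7 5669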

As I said, it can be improved upon, probably by looking at the ratios of the sides of the blobs to find squareness, but it may give you some ideas. By the way, can you say what the blobs are?

Mark Setchell
  • Many thanks Mark, I had not realised that ImageMagick was so powerful; I will certainly have to look into using it more in the future. By the way, I had a look at your website and love your photography; I actually went to school at Pate's, so I recognise quite a few of your locations. Keep up the good work. – William Grimes Jun 10 '15 at 18:12

I gave the following algorithm in a comment on the OP's question, but it's a short snippet, so why not give the written answer for OpenCV with Python as well.

Hopefully this is a bit more extensible than Mark Setchell's answer and more on point with the OP's tags.

import cv2
import numpy as np

img = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)

# binarise: pixels above 127 become white
ret,thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# note: OpenCV 3.x returns (image, contours, hierarchy) here;
# 2.x and 4.x return (contours, hierarchy)
contours,hierarchy = cv2.findContours(thresh, cv2.RETR_LIST,
                                      cv2.CHAIN_APPROX_SIMPLE)

#color image for testing purposes
color = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)

for cnt in contours:
    x,y,w,h = cv2.boundingRect(cnt)
    # longer/shorter side ratio of the bounding box: close to 1 for
    # blob-like objects, large for the streaks
    delta = max(w, h) / float(min(w, h))
    if delta < 4 and w > 10 and h > 10:
        cv2.rectangle(color, (x,y), (x+w,y+h), (0,0,255), 2)

cv2.imwrite("c.jpg", color)

It produced 6-7 extra small objects that are false detections. However, this can easily be improved by using the erode and dilate functions as I mentioned, as well as perhaps swapping the threshold for a Canny edge-detection algorithm. False detections can be sorted out by requiring a larger rectangle width and height.

[Detection result with bounding boxes drawn around candidate objects]

Updated code

Just to show off some extra options you can play with.

import cv2
import numpy as np

img = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)

img[np.where(img<100)] = 0 #set all pixels with intensities below 100 to 0
img[(img>100) & (img<244)] += 10 #push the remaining mid-range pixels (100-243, so adding 10 cannot overflow) towards white, exaggerating the objects
img = cv2.equalizeHist(img) #just for good measure, I suppose?

#matrices filled with '1' everywhere, different dimensions
erode_kernel = np.ones((4,4))
dilate_kernel = np.ones((9,9))
small_dilate_kernel = np.ones((2,2))

erode = cv2.erode(img, erode_kernel)
dilate = cv2.dilate(erode, dilate_kernel)

canny = cv2.Canny(dilate, 180, 255) #hysteresis thresholds on the gradient magnitude: above 255 is a strong edge, 180-255 is kept only if connected to a strong edge
canny = cv2.dilate(canny, small_dilate_kernel) #just to combine close edges so they appear as a single edge, might be a bad idea
contours,hierarchy = cv2.findContours(canny, cv2.RETR_EXTERNAL, #RETR_EXTERNAL ignores all inside-object features and returns just the outermost contours
                                      cv2.CHAIN_APPROX_NONE)

a = np.zeros(img.shape) #test image to see what happened so far
cv2.drawContours(a, contours, -1, (255,255,255), 1)
cv2.imwrite("contours.jpg", a)

#color image for testing purposes
color = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
for cnt in contours:
    x,y,w,h = cv2.boundingRect(cnt)
    # longer/shorter side ratio, as in the first snippet
    delta = max(w, h) / float(min(w, h))
    if delta < 4 and w > 20 and h > 20:
        cv2.rectangle(color,(x,y),(x+w,y+h),(0,0,255),2)

cv2.imwrite("c.jpg", color)

Good luck; computer vision takes a lot of fiddling to get it to work the way you want. Take your time and read the manuals. Overlapping things will be hard to tell apart, as right now we're trying our best to combine them into a single object.

If you could get the coffee beans to appear as circles, you could try using HoughCircles detection. But seeing how they're fairly irregular, I'm not quite sure that's the best way to go. Training your own Haar cascade might be your best bet, but I've never done it before, so I can't help much in that respect.
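For reference, a HoughCircles attempt would look roughly like this; every parameter here is a guess that would need tuning to the actual bean size, and cv2.HOUGH_GRADIENT is the OpenCV 3+ name of the method flag:

import cv2
import numpy as np

img = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)  # the detector is sensitive to noise

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=180, param2=30,
                           minRadius=10, maxRadius=30)

color = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
if circles is not None:
    # circles are reported as (x, y, radius) triples
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(color, (x, y), r, (0, 0, 255), 2)
cv2.imwrite("circles.jpg", color)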

ljetibo
  • Huge kudos for this! That is an elegant solution, and similar to a result I arrived at with ImageJ (although I prefer your Python implementation). In later frames of the video the white streaks sometimes overlay the 'coffee bean'-like particles, which could be problematic; I will post another image as an example. However, this is a great starting point. Other than approaches using thresholding, could some kind of signature for the 'coffee bean'-like particles be used to search for them in the image? I'm not sure how best to describe it, but I'm just wondering what the alternatives to thresholding are. – William Grimes Jun 10 '15 at 17:51
  • I will also try Canny edge detection to see how that turns out. – William Grimes Jun 10 '15 at 18:01
  • @WillyWonka1964: Try reading up on all the functions I posted. You can get far with just those, but forget having _excellent_ scores; 60-70% is good. Experiment with using the Canny detector in combination with erosion and dilation (see updated code above). Try using different tree structures for the contour operators. See the trick you can do with np arrays to get more contrast (updated code). If you really, really want, you can try and train your own [Haar object classifier](http://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html) if all your beans are very much alike. – ljetibo Jun 10 '15 at 18:52

Thanks for all the suggestions. I have investigated many approaches to this problem, and the best result I have had has been using OpenCV with Haar-like feature cascade classification.

I followed this tutorial, and achieved a high degree of accuracy:

http://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html
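Once a cascade is trained, using it only takes a few lines. A sketch, assuming the tutorial's training step produced a file called cascade.xml; the detectMultiScale parameters below are generic starting points, not necessarily the ones I settled on:

import cv2

cascade = cv2.CascadeClassifier("cascade.xml")  # hypothetical output of the training step
img = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)

# detectMultiScale scans the image at several scales and returns
# one (x, y, w, h) bounding box per detected object
objects = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5,
                                   minSize=(20, 20))

color = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
for (x, y, w, h) in objects:
    cv2.rectangle(color, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("detections.jpg", color)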

Here is the result for the first two images; with some optimisation of the classifier it could be improved even further:

[Detection result on the first image]

[Detection result on the second image]

William Grimes