
I'm trying to do a segmentation task on the picture below.

original image

I'm using fuzzy c-means with some minimal pre-processing. The segmentation has 3 classes: background (the blue region), meat (the red region), and fat (the white region). The background segmentation works perfectly. However, the meat-and-fat segmentation on the left side of the photo maps lots of meat tissue as fat. The final meat mask looks like this:

mask

I suspect that's because of the lighting conditions, which make the left side brighter, so the algorithm classifies that region as the fat class. I also think there could be some improvement if I could somehow make the surface smoother. I've used a 6x6 median filter, which works all right, but I'm open to new suggestions. Any suggestions on how to overcome this problem? Maybe some kind of smoothing? Thanks :)
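For reference, my pipeline is roughly equivalent to the sketch below. It is not my actual code (I work in MATLAB); the fuzzy c-means here is a tiny NumPy reimplementation so the example is self-contained, and the exact parameters differ.

```python
import numpy as np
from scipy.ndimage import median_filter

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means. X: (N, D) samples. Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(X)))
    u /= u.sum(axis=0)                              # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ X) / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u = d ** (-2.0 / (m - 1.0))                 # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

def segment(rgb, n_classes=3):
    """Median-filter each channel, then cluster the RGB pixels into n_classes."""
    smoothed = np.stack([median_filter(rgb[..., i], size=6) for i in range(3)], axis=-1)
    X = smoothed.reshape(-1, 3).astype(float)
    _, u = fuzzy_cmeans(X, n_classes)
    return u.argmax(axis=0).reshape(rgb.shape[:2])  # hard labels from soft memberships
```

The mask for each class is then just `labels == k` for the cluster index `k` that landed on that class.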

Edit 1: The fat areas are roughly marked in the photo below. The top area is ambiguous, but as rayryeng has mentioned in the comments, if it is ambiguous for me as a human, it's all right for the algorithm to misclassify it too. But the left-hand section is clearly all meat, and the algorithm assigns a big chunk of it as fat.

rough fat segments

Saam
  • Could you show us what the ideal segmentation would look like? I'm having a hard time figuring out what is fat and what is meat from that sample... and if I (i.e. a human) have a hard time figuring that stuff out, then it's probably even harder for an algorithm to determine the same regions. There are obvious areas, like the bottom half of the piece and in the middle, but there are some that are a bit ambiguous. Also, some code to illustrate how you got the above result would be beneficial. It may simply be just a small change in a parameter (or two) of your current algorithm. – rayryeng Jun 08 '15 at 00:27

2 Answers


The first rule in segmentation is "try to describe how you (as a human being) were able to do the segmentation". Once you do that, the algorithm becomes clear. In other words, you must perform the following two tasks:

  1. Identify the unique features of each segmented part (in your case - what is the difference between the fat and the meat).
  2. Choose a classifier that best suits you (C-means, neural network, SVM, decision tree, boosted classifier, etc.). This classifier will operate on the features selected in step 1.

It seems that you skipped step 1, and that is the problem with your algorithm.

Here are my observations:

  1. The brightness of a pixel does not differentiate between meat and fat. It depends mostly on the illumination, the angle of the tissue, and specular reflection. So you must remove the brightness.
  2. It seems that the fat is more "yellow". In other words, the ratio of red to green is much higher for meat than for fat. That is the feature from which I would start the implementation. To grab that feature, convert your RGB image to the YUV or HSV color space. If you work with YUV, completely discard the Y and run your classifier on the V plane. In HSV, run it on the H plane. This way you discard the brightness and deal only with the colors (mainly the red and green components). I also recommend using those color spaces for background separation.
  3. Next step: you should add more features to your classifier, since color alone will not be enough. Another observation is that meat is a much more flexible tissue, so it will have more wrinkles on it, while fat tends to be smoother. You can search for edges and feed the total amount of edges as another feature to your classifier.
  4. Continue observing your results, identify where the classifier made mistakes, and try to come up with new features that separate the two textures better. Examples of features which might be very good in your case: HOG, LBP on a pyramid of images, MCT features, three-patch LBP, (x,y)-projections. My intuition whispers that three-patch LBP will help you the most, but it is very difficult to explain why.
  5. Personal suggestion: I don't know which features are implemented in MATLAB, but you should start from the features that already exist, to save time on writing a lot of new code. For example, I know that Haar features are already implemented in MATLAB, but they might not be descriptive enough by themselves for your case. Combine a few types of features to get the strongest result, and avoid using overlapping features (two different features that capture almost identical information in the image). For example, if you use MCT, don't use LBP.
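Observations 2 and 3 above can be sketched as a small feature extractor. This is an illustrative NumPy version, not the answerer's code; the red/green ratio stands in for the U/V or H planes, and the function name is mine.

```python
import numpy as np
from scipy.signal import convolve2d

def meat_fat_features(rgb, patch=5):
    """Two roughly brightness-invariant features per pixel:
    red/green ratio (higher for meat than for fat, per observation 2) and
    local edge density (meat is wrinklier, per observation 3).
    rgb: float array (H, W, 3) with values in [0, 1]."""
    r, g = rgb[..., 0], rgb[..., 1]
    rg_ratio = r / (g + 1e-6)                        # chroma-style color feature
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    edge_mag = np.hypot(gx, gy)                      # gradient magnitude
    kernel = np.ones((patch, patch)) / patch ** 2    # box filter = local average
    edge_density = convolve2d(edge_mag, kernel, mode="same", boundary="symm")
    return np.stack([rg_ratio, edge_density], axis=-1)   # (H, W, 2) feature map
```

Each pixel then carries a color feature and a texture feature, to which you can keep appending the stronger descriptors listed in point 4.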

For more information you can read my answer here about texture similarities. You have the reverse problem (instead of measuring similarity, you want to train a classifier that distinguishes between non-similar textures), but the framework of the solution is identical: identify important features which distinguish between the textures, concatenate the features into a vector, and run a classifier. You can run the classifier on each pixel or on small image patches (say, 5x5 pixels). The result you are aiming for is a classifier smart enough that, for every patch in the image, it can tell you whether that patch more resembles a chunk of meat or of fat.
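As a sketch of that patch framework (the helper name is mine, not from the answer): cut a per-pixel feature map into small patches, concatenate each patch's values into one row vector, and feed the rows to whatever classifier you pick.

```python
import numpy as np

def patch_vectors(feature_map, size=5):
    """Split an (H, W, F) per-pixel feature map into non-overlapping
    size x size patches; return one concatenated row vector per patch."""
    H, W, F = feature_map.shape
    H, W = H - H % size, W - W % size                 # drop ragged borders
    p = feature_map[:H, :W].reshape(H // size, size, W // size, size, F)
    p = p.transpose(0, 2, 1, 3, 4)                    # (rows, cols, size, size, F)
    return p.reshape(-1, size * size * F)             # one vector per patch
```

Each row then goes to the classifier (SVM, neural network, etc.), which can output soft memberships such as (87% meat, 13% fat) per patch.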

DanielHsH
  • Thanks DanielHsH, very insightful reply. I'll try your suggestions. Meanwhile, do you have any ideas on reducing the reflection/glare? Do you think it would be worth the try to remove the glare? Thanks again :-) – Saam Jun 09 '15 at 09:04
  • 1
    Not sure. On the example image the glares seem to be very localized (covering small areas). So when your classifier will be strong - they should not affect it. You can try to remove them by thresholding very bright pixels or median filtering but I am not sure it will help in the long run. Currently your classifier is very weak so it is easily affected by glares. You should concentrate your effort on improving the classifier strength (by adding features with good separation abilities). Dealing with minor issues like glares might be a distraction. P.s. - I updated my answer to help you more – DanielHsH Jun 09 '15 at 17:40
  • Thanks @DanielHsH, the added section made it much clearer. I'm going to try different features to see how they perform on this specific problem. I'm using an unsupervised learning scheme where each photo is fed to a classifier to label pixels (or regions) as background, meat, or fat (I've tried k-means and fuzzy c-means). So when you talk about training classifiers, I assume you mean extracting a set of features for each pixel/region and feeding it to the clustering algorithm to assign each pixel/region to the appropriate cluster. Is that right? – Saam Jun 10 '15 at 15:07
  • 1
    Exactly. On each region of NxN pixels you extract some features and concatenate them to vector [feature1, feature2, .... featureK]. Now you feed this vector to a classifier. I don't like the term "clustering algorithm" because it is commonly presumes hard clustering (Meat or Fat). Classifier (like neural network, SVM, etc) gives you soft clustering. Result of classifier can be (87% meat, 13% fat), or (50%-50% in case that the classifier cannot decide). Soft clustering is much better since you can see a smooth heat map where meat and fat region appear and apply a stronger post processing – DanielHsH Jun 10 '15 at 19:19

In case you do not have output labels, you need to apply an unsupervised learning algorithm for classification. For many images, the human eye is not a perfect tool for classification; that is why we use computers :D, since they can show us the distribution of intensities and suggest different classes. One alternative is using connected components to identify and separate the fat, meat, and background classes, since they have totally different intensities except at the edges between meat and fat.
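A minimal version of that idea might look like the following (the thresholds and the size cutoff are hypothetical and must be tuned per image; `scipy.ndimage.label` does the connected-component step):

```python
import numpy as np
from scipy import ndimage

def intensity_components(gray, lo, hi, min_size=10):
    """Keep pixels whose intensity falls in [lo, hi], label the connected
    components of that mask, and drop components smaller than min_size.
    Returns (label_image, number_of_kept_components)."""
    mask = (gray >= lo) & (gray <= hi)
    labels, n = ndimage.label(mask)                 # 4-connected components
    sizes = np.asarray(ndimage.sum(mask, labels, index=range(1, n + 1)))
    keep = np.flatnonzero(sizes >= min_size) + 1    # component ids to keep
    out = np.where(np.isin(labels, keep), labels, 0)
    return out, len(keep)
```

Running this once per intensity band (background, meat, fat) gives one labeled mask per class.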

You can see the output of my thresholding-based segmentation with different parameters below. Please let me know if that is what you want, so that I can support you with the code. Bests

parameterset 1

parameterset 2

fati