
I am trying to build a convolutional neural network (at the moment using a ResNet architecture) to classify mammogram images by density (the classes range from A, very low density, to D, very homogeneously dense).

It works well on the middle classes, but I am facing a problem with the extremes (A and D). Some very dense mammograms are of breasts whose image is completely filled with dense tissue, so they have low internal contrast (i.e., no edges), while the very low-density mammograms also have very low internal contrast (a complete lack of such tissue). For a human it is very easy to tell them apart, since one image is very bright while the other is very dark, but convolutional neural networks mostly respond to edges, so they are easily confused.

I wanted to know if there are techniques I could use to improve the CNN's ability to differentiate such cases (two mostly smooth textures with very different base intensity values).

I tried adding a non-AI decision step after the CNN's analysis that measures the average density of the mammogram, but it felt very crude, and I was hoping to find something usable directly within the CNN.

  • CNNs should be good at doing what you need. The problem is probably in the preprocessing. Did you use some form of normalization that makes uniform images collapse to the same value? – marco romelli Jul 21 '23 at 13:04

0 Answers