
I can train the deeplabv3+ model on my dataset and it gives good results. But my problem with it is that the model infers pixels with very high probability.

The model treats pixels in black and white: either a pixel does not belong to a class at all, or it definitely belongs to one.

For example, when the model is sure a pixel belongs to a person, it assigns it a probability of 0.99. But when it encounters an ambiguous pixel, say a blurred pixel from a person's hand, it assigns a probability of 0.04 or lower of being a person, whereas I would expect a value around 0.4 to 0.6 for such a pixel.

It is important for me to get such values.

I've tried weighting the critical parts of my dataset (like hands, etc.) to make it harder for the model to learn the shapes, but the problem persists. I know my model is not over-fitted in the formal sense (the mIoU of my model on the test dataset is low, as expected), but its predictions are polarized as described above.
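To illustrate what polarized predictions look like numerically, here is a minimal sketch of temperature scaling, a common post-hoc calibration technique: dividing the logits by a temperature T > 1 before the softmax softens the output distribution. The logit values below are hypothetical, chosen only to show the effect; this is not the model's actual output.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical per-pixel logits for [person, background]
logits = np.array([4.0, -1.0])

for T in (1.0, 2.0, 4.0):
    p = softmax(logits / T)
    print(f"T={T}: p(person)={p[0]:.3f}")
```

With T=1 the "person" probability is near 0.99; raising T pulls it toward 0.5 without changing the argmax, so the segmentation itself is unchanged while the confidence becomes less extreme. T is normally fit on a held-out set by minimizing negative log-likelihood.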

Any ideas would be appreciated.

  • I’m voting to close this question because it is not about programming as defined in the [help] but about ML theory and/or methodology - please see the intro and NOTE in https://stackoverflow.com/tags/machine-learning/info – desertnaut Aug 02 '22 at 08:24
  • Your model may be saturated. Google "saturated deep learning model". – Mohammad Shokouhi Gol Aug 02 '22 at 11:22

0 Answers