
I have custom-trained DeepLabV3 on a set of images for detecting human contours. My crop size is 1025 x 513. I am able to train and export the trained model. When testing, I get decent mask outputs when the input images are around 1025 x 450. But when I change the input image width, the output improves on some images and gets worse on others. The variation mostly happens on smaller parts of the input image, like fingers and hair: portions of the hands get cropped from the output.

[Image: the finger is cropped from the output mask]

[Image: when we change the input size of the image, the output becomes better]

But the cropping varies randomly between runs with different input sizes.
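Since the model was trained with a fixed crop size, one common workaround is to keep inference inputs at those same dimensions by zero-padding smaller images up to the training crop size instead of feeding arbitrary widths. A minimal sketch (the helper `pad_to_crop_size` is hypothetical, not part of DeepLab's API):

```python
import numpy as np

def pad_to_crop_size(image, crop_h=513, crop_w=1025):
    """Zero-pad an HxWxC image up to the model's training crop size.

    Keeping inference inputs at the dimensions the model was trained on
    (rather than resizing to arbitrary widths) tends to give more stable
    masks; the padding can be stripped from the output mask afterwards.
    """
    h, w = image.shape[:2]
    if h > crop_h or w > crop_w:
        raise ValueError("image larger than crop size; downscale it first")
    pad_h = crop_h - h
    pad_w = crop_w - w
    # Pad bottom and right edges with zeros; leave channels untouched.
    return np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="constant")

# Example: a 450 x 900 image padded to the 513 x 1025 training crop size.
img = np.ones((450, 900, 3), dtype=np.uint8)
padded = pad_to_crop_size(img)
print(padded.shape)  # (513, 1025, 3)
```

After inference, crop the predicted mask back to the original `h x w` region so the padding does not appear in the result.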

• It will give better predictions for the same dimensions that you trained it on. If you want it to predict well for smaller dimensions, add training data with small dimensions. – Ajinkya Apr 30 '19 at 07:20
