I have custom-trained DeepLab v3 on a set of images for detecting human contours. My crop size is 1025 x 513. I am able to train and export the trained model. When testing, I get decent mask outputs when the input images are around 1025 x 450. But when I change the input image width, the output improves on some images and gets worse on others. The variation mostly affects smaller parts of the input image, like fingers and hair; portions of the hands get cropped from the output mask.
For example, the fingers are cropped out of the predicted mask. When I change the input image size, the output sometimes becomes better, but the cropping varies randomly from image to image.
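One thing worth ruling out is the preprocessing at inference time: if test images are resized or padded differently from the training crop size, thin structures like fingers and hair are often the first to degrade. Below is a minimal, hedged sketch (my own illustration, not DeepLab's actual preprocessing code) of keeping the aspect ratio and zero-padding every input to the fixed 1025 x 513 training crop, so the network always sees the resolution it was trained on. The function name `pad_to_crop` and the dependency-free nearest-neighbour resize are assumptions for illustration; in practice you would use your framework's resize op.

```python
import numpy as np

CROP_W, CROP_H = 1025, 513  # training crop size (width x height), from the question


def pad_to_crop(img: np.ndarray, crop_w: int = CROP_W, crop_h: int = CROP_H) -> np.ndarray:
    """Downscale preserving aspect ratio (never upscale), then zero-pad
    to the fixed training crop so inference matches the training resolution."""
    h, w = img.shape[:2]
    scale = min(crop_w / w, crop_h / h, 1.0)  # never upscale
    new_w, new_h = int(w * scale), int(h * scale)
    # nearest-neighbour resize via index sampling (keeps this sketch dependency-free)
    ys = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # place the resized image in the top-left corner of a zero-filled canvas
    padded = np.zeros((crop_h, crop_w) + img.shape[2:], dtype=img.dtype)
    padded[:new_h, :new_w] = resized
    return padded
```

After inference you would crop the predicted mask back to the `new_h` x `new_w` region and rescale it to the original image size, so the padding never shows up in the final mask.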