I trained a U-Net on satellite image patches of size 120 × 120.
Now I need to apply the model to a much bigger image (10980 × 10980). What I tried was to slice the big image into 120 × 120 tiles, classify each tile, and reassemble the predictions into a single output image, roughly as in the sketch below.
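This is a minimal sketch of the slicing and reassembly, not my exact code: `model` stands for my trained U-Net (Keras-style `predict`), the function and constant names are just illustrative, and the padding handles the fact that 10980 is not a multiple of 120 (91 × 120 = 10920):

```python
import numpy as np

TILE = 120  # patch size the U-Net was trained on

def predict_large(image, model):
    """Slice a large (H, W, C) image into TILE x TILE patches,
    classify each patch, and stitch the predictions back together."""
    h, w = image.shape[:2]
    # Pad so both dimensions are exact multiples of TILE
    pad_h = (TILE - h % TILE) % TILE
    pad_w = (TILE - w % TILE) % TILE
    padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="reflect")

    out = np.zeros(padded.shape[:2], dtype=np.int32)
    for y in range(0, padded.shape[0], TILE):
        for x in range(0, padded.shape[1], TILE):
            patch = padded[y:y + TILE, x:x + TILE]
            # model.predict expects a batch; output is (TILE, TILE, n_classes)
            pred = model.predict(patch[np.newaxis, ...])[0]
            out[y:y + TILE, x:x + TILE] = pred.argmax(axis=-1)
    return out[:h, :w]  # crop the padding away
```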
My question: is this approach viable? I can see discontinuities at the tile borders in my output image below.
PS: I saw the question "semantic segmentation for large images", where a user said this is doable. If so, is there any way to make the borders more continuous?