  • I trained a U-Net on satellite image patches of size 120 × 120.

  • I need to apply the model to a much bigger image (10980 × 10980). What I tried was to slice the big image into 120 × 120 tiles, classify each tile, and assemble the predictions into a new image.

  • My question is: is this approach viable, given the discontinuities I can see in my output image below?

Output image

PS: I saw the question "semantic segmentation for large images", where a user said this is doable. If so, is there any way to make the tile borders more continuous?
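For reference, the slice-classify-reassemble approach described above can be sketched as follows. The `predict_tile` function is a hypothetical stand-in for the real U-Net (it just predicts each tile's mean, which makes the per-tile independence, and hence the seams, easy to see):

```python
import numpy as np

def predict_tile(tile):
    # Stand-in for the real U-Net: predicts a constant mask per tile
    # (hypothetical; each tile is classified with zero context from its
    # neighbours, which is exactly what creates visible tile borders).
    return np.full(tile.shape, tile.mean())

def tile_and_stitch(image, tile=120):
    """Naive slicing: classify each tile independently, then paste the
    results back. Edge tiles may be smaller when the image size is not a
    multiple of `tile` (10980 is not a multiple of 120)."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = predict_tile(patch)
    return out

# Small demo image; the real one would be 10980 x 10980.
mask = tile_and_stitch(np.random.rand(360, 480))
```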

Gon

2 Answers


If your model is fully convolutional, you can trivially apply it to larger images. Your only limitation is your device's memory size.
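To illustrate why a fully convolutional model is size-agnostic, here is a minimal NumPy sketch: a single "valid" convolution layer standing in for the network (the kernel and the sizes are assumptions for illustration; a real U-Net behaves the same way as long as it contains only convolutions, pooling, and upsampling, with no fixed-size dense layers):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def fully_conv_layer(image, kernel):
    # One "valid" convolution: it accepts any input size, which is why a
    # fully convolutional network is not tied to a 120 x 120 input.
    windows = sliding_window_view(image, kernel.shape)
    return np.einsum('ijkl,kl->ij', windows, kernel)

kernel = np.ones((3, 3)) / 9.0  # toy 3x3 mean filter
small = fully_conv_layer(np.random.rand(120, 120), kernel)    # (118, 118)
large = fully_conv_layer(np.random.rand(1000, 1000), kernel)  # (998, 998)
```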

If you have no choice but to slice the image, you can still avoid discontinuities by taking your model's receptive field into account:
If you take crops that are larger than the target tile by at least the receptive-field radius on each side, and keep only the central, "valid", part of each output mask, you should get a smooth and continuous stitched mask.
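A minimal sketch of that overlap-and-keep-the-center idea, assuming a toy "model" (a 5 × 5 mean filter with receptive-field radius 2) in place of the real U-Net. With a context margin at least as large as the receptive-field radius, the stitched result is identical to running the model on the whole image in one pass:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

RF = 2  # receptive-field radius of the toy model below (a 5x5 mean filter)

def model(patch):
    # Stand-in for the U-Net: a 5x5 "same" mean filter via reflect padding.
    padded = np.pad(patch, RF, mode='reflect')
    windows = sliding_window_view(padded, (2 * RF + 1, 2 * RF + 1))
    return windows.mean(axis=(2, 3))

def stitch_with_context(image, tile=120, margin=RF):
    """Crop each tile with `margin` pixels of surrounding context, predict,
    and keep only the central "valid" region of each prediction."""
    h, w = image.shape
    out = np.empty((h, w), dtype=float)
    padded = np.pad(image, margin, mode='reflect')
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            th, tw = min(tile, h - y), min(tile, w - x)
            crop = padded[y:y + th + 2 * margin, x:x + tw + 2 * margin]
            pred = model(crop)
            out[y:y + th, x:x + tw] = pred[margin:margin + th,
                                           margin:margin + tw]
    return out

img = np.random.rand(130, 170)
# With margin >= RF, the tiled result matches a single full-image pass.
stitched = stitch_with_context(img, tile=50)
```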

Shai

I think this library does what you need: it blends overlapping patch predictions using a simple second-order spline window function:

https://github.com/Vooban/Smoothly-Blend-Image-Patches

It only works if your original image is not extremely big, because of memory constraints.
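The underlying idea can be sketched in a few lines of NumPy: predict overlapping tiles, weight each prediction with a smooth window that falls to near zero at the tile borders, accumulate, and normalize by the accumulated weights. Here a Hann window stands in for the library's second-order spline window, and `predict` is a hypothetical stand-in for the model; both are assumptions for illustration, not the library's actual code:

```python
import numpy as np

def predict(tile):
    # Hypothetical stand-in for the real U-Net prediction.
    return np.full(tile.shape, tile.mean())

def blend_predictions(image, tile=120, overlap=0.5):
    """Weighted blending of overlapping tile predictions.
    Assumes the image dimensions line up with the stride, so no strip of
    pixels is left uncovered."""
    h, w = image.shape
    step = int(tile * (1 - overlap))
    win1d = np.hanning(tile)             # smooth bump, ~0 at the borders
    win = np.outer(win1d, win1d) + 1e-8  # epsilon avoids divide-by-zero
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    for y in range(0, h - tile + 1, step):
        for x in range(0, w - tile + 1, step):
            pred = predict(image[y:y + tile, x:x + tile])
            out[y:y + tile, x:x + tile] += pred * win
            weight[y:y + tile, x:x + tile] += win
    return out / weight

blended = blend_predictions(np.random.rand(480, 600), tile=120)
```

Because every pixel's final value is a weighted average dominated by tiles in which it sits near the center, the hard seams of non-overlapping tiling fade out.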

ferlix