
I am dealing with an issue while using my model to predict masks on MRI images. I have two images with different dimensions, and my goal is to measure how different their masks are. However, because my model only accepts (256, 256) inputs, I have to resize them. During resizing, the organ becomes very dissimilar between the two images because the original dimensions were different. Is there an image processing technique I can use to resize both input images so that their content remains as it was before?

1 Answer


You could also CenterCrop (https://pytorch.org/vision/stable/generated/torchvision.transforms.CenterCrop.html) your images. Especially if their dimensions are already close to your target size, you won't lose much information, and in most cases the information you care about is in the center of the image anyway.

import torchvision.transforms.functional as F

# Crop the central 256x256 region without rescaling the content
img = F.center_crop(img, [256, 256])
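If the two images have very different sizes, a cropping-only approach can discard too much of one of them. A common pattern (a sketch, not from the original answer; resizing the shorter side first is my assumption about your preprocessing) is to resize the shorter edge to 256 while preserving the aspect ratio, then center-crop, so the organ is not stretched:

import torchvision.transforms as T

# Passing a single int to Resize scales the shorter edge to 256
# and keeps the aspect ratio, so the content is not distorted;
# CenterCrop then takes the central 256x256 region.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(256),
])

model_input = preprocess(img)  # img: PIL Image or tensor of shape (C, H, W)

Applying the same pipeline to both images keeps their contents geometrically comparable before the masks are predicted.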