I am dealing with an issue while using my model to predict masks on MRI images. I have two images with different dimensions, and my goal is to measure how different their masks are. Because my model only accepts (256, 256) inputs, I have to resize both images, but since their original dimensions differ, resizing distorts the organ differently in each one. Is there an image processing technique I can use to resize both input images so that their content keeps its original proportions?
Please share sample input images and what you want them to look like! – Markus Aug 04 '22 at 06:54
1 Answer
You could also CenterCrop (https://pytorch.org/vision/stable/generated/torchvision.transforms.CenterCrop.html) your images. If their dimensions are already close to the target size, you won't lose much information, and in most cases the relevant content is in the center of the image anyway.
import torchvision.transforms.functional as F

crop_size = 256  # the model's expected input size
img = F.center_crop(img, crop_size)  # crop the central region; nothing is rescaled, so no distortion
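
If center cropping would cut away part of the organ, a common alternative is to pad each image to a square first and then resize, so that both axes are scaled by the same factor and the organ keeps its proportions. Below is a minimal sketch of that idea, assuming PIL image inputs; the helper name resize_with_padding and the file name "scan.png" are only for illustration.

import torchvision.transforms.functional as F
from PIL import Image

def resize_with_padding(img, size=256):
    # pad the shorter side so the image becomes square;
    # the subsequent resize then scales both axes equally,
    # so the organ is not stretched differently per image
    w, h = img.size
    max_side = max(w, h)
    pad_left = (max_side - w) // 2
    pad_top = (max_side - h) // 2
    pad_right = max_side - w - pad_left
    pad_bottom = max_side - h - pad_top
    img = F.pad(img, [pad_left, pad_top, pad_right, pad_bottom], fill=0)
    return F.resize(img, [size, size])

img = resize_with_padding(Image.open("scan.png"), size=256)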

Marvin
Please consider providing some sample source code to support your explanation. – Azhar Khan Nov 08 '22 at 14:29