
I have some 3D NIfTI datasets of brain MRI scans (FLAIR, T1, T2, ...). The FLAIR scans, for example, are 144x512x512 with a voxel size of 1.1 x 0.5 x 0.5, and I want to extract 2D slices from the axial, coronal and sagittal views to use as input for my CNN.

What I want to do: read in the .nii files with nibabel, convert them to NumPy arrays and save the axial, coronal and sagittal slices as 2D PNGs.

What I tried:

- Used the med2image Python library

- Wrote my own Python script with nibabel, NumPy and an image library

PROBLEM: The axial and coronal pictures are somehow stretched in one direction. The sagittal view works as it should.

I tried to debug the Python script and used Matplotlib to show the array that I get after

image = nibabel.load(inputfile)
image_array = image.get_fdata()

by using for example:

plt.imshow(image_array[:,:, 250])
plt.show()

and found out that the data is already stretched at that point.

I managed to get the desired output with

header = image.header
sX = header['pixdim'][1]
sY = header['pixdim'][2]
sZ = header['pixdim'][3]
plt.imshow(image_array[:, :, 250], aspect=sX/sZ)

But how can I apply something like "aspect" when saving my image? Or is there a way to load the .nii file with such parameters in the first place, so that the data is already corrected when I work with it?
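What I imagine could work is resizing each slice by the spacing ratio before saving, so that the pixels become square. A rough sketch of that idea (the array and spacing values here are made up for illustration, and `scipy.ndimage.zoom` is just one possible resampling function):

```python
import numpy as np
from scipy.ndimage import zoom

def make_isotropic_slice(slice2d, spacing_row, spacing_col):
    """Resample a 2D slice so both pixel dimensions have the same physical size."""
    s = min(spacing_row, spacing_col)
    # a zoom factor > 1 enlarges the axis whose voxels are physically larger
    return zoom(slice2d, (spacing_row / s, spacing_col / s), order=1)

# toy slice: 144 rows at 1.1 mm spacing, 512 columns at 0.5 mm spacing
slice2d = np.random.rand(144, 512)
fixed = make_isotropic_slice(slice2d, 1.1, 0.5)
print(fixed.shape)  # roughly (317, 512): 144 * (1.1 / 0.5) = 316.8 rows
```

The resampled array could then be saved as a PNG without any further aspect correction.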

It looks like the pixel dimensions are not taken into account when nibabel loads the .nii image, and so far I haven't found a way to solve this.

Nanex1011
  • Can you please post sample NIfTI images for the axial and coronal views? [minimal-reproducible-example](https://stackoverflow.com/help/minimal-reproducible-example) – Bilal Dec 26 '20 at 17:08
  • I don't think I'm allowed to post sample images, since they are from the ISBI 2015 MS Lesion Segmentation challenge and the MICCAI 2016 lesion segmentation challenge, for example. The MICCAI 2008 dataset works perfectly fine, since it is 512x512x512. – Nanex1011 Jan 01 '21 at 18:36
  • You can try an app like 3D Slicer: if the application shows the images correctly, then your code isn't correct, so check your image dimensions and which axis is the slice number and which are the rows and columns of the image; otherwise there might be issues in the images themselves. – Bilal Jan 01 '21 at 18:39

1 Answer


I found out that it doesn't make a difference for training my ML model whether the pictures are stretched or not, since I also apply such transforms during data augmentation. Opening the NIfTI volumes in 3D Slicer or MRIcroGL showed them as expected, since these programs take the header into account. The predictions were also perfectly fine (even though the pictures came out "stretched" when saved slice-wise).

Still, it annoyed me to look at stretched pictures, so I implemented some resizing with cv2:

import os
import cv2
import numpy

def saveSlice(img, fname, path):
    # scale intensities to 0-255 (assumes img is already normalized to [0, 1])
    img = numpy.uint8(img * 255)
    fout = os.path.join(path, f'{fname}.png')
    # resize to a fixed output size, which also undoes the anisotropic stretching
    img = cv2.resize(img, dsize=(IMAGE_WIDTH, IMAGE_HEIGHT), interpolation=cv2.INTER_LINEAR)
    cv2.imwrite(fout, img)
    print(f'[+] Slice saved: {fout}', end='\r')

The results are really good and it works pretty well for me.
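Another option would be to resample the whole volume to isotropic voxels once, right after loading, so that every extracted slice already has the right proportions. A sketch of that approach (not what I ended up using; `scipy.ndimage.zoom` is one way to resample, and the small random volume below just stands in for a real scan whose spacings would come from `image.header.get_zooms()`):

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, spacings):
    """Resample a 3D volume so every voxel side has the smallest spacing."""
    s = min(spacings)
    factors = tuple(sp / s for sp in spacings)
    return zoom(volume, factors, order=1)

# stand-in volume with the same anisotropy as the FLAIR scans (1.1, 0.5, 0.5)
vol = np.random.rand(16, 32, 32)
iso = resample_isotropic(vol, (1.1, 0.5, 0.5))
print(iso.shape)  # first axis grows by about 2.2x, the other two stay the same
```

The trade-off is memory and interpolation cost up front, instead of resizing each slice individually.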

Nanex1011