
I am dealing with CT images that contain the patient's head, but also 'shadows' of the metallic cylinder.

[image: CT slice with the cylinder 'shadow' below the head]

These 'shadows' can appear below, to the left, or to the right of the head. In the image above, the shadow appears only at the bottom of the image; in the image below, shadows appear on both the left and the right. I don't have any prior knowledge of whether a shadow of the cylinder is present in a given image, so I must somehow detect it and remove it. Then I can proceed to segment the skull/head.

[image: CT slice with cylinder 'shadows' to the left and right of the head]

To create a reproducible example I would like to provide the NumPy array (128x128) representing the image, but I don't know how to upload it to Stack Overflow.

How can I achieve my objective?

I tried segmentation with ndimage and scikit-image, but it does not work: I get too many segments.


[image: over-segmented result with many labelled regions]


12 Original Images

[image: grid of the 12 original slices]

The 12 Images Binarized

[image: grid of the 12 binarized slices]

The 12 Images Stripped (with dilation, erosion = 0.1, 0.1)

[image: grid of the 12 stripped slices, some marked in red]

The images marked in red cannot help create a rectangular mask that will envelop the skull, which is my ultimate objective.

Please note that I will not be able to inspect the images one by one during the application of the algorithm.

  • What values do you have for the shield and for the skull? – norok2 Oct 06 '19 at 20:10
  • Close to 1. I will try kmeans with binarized images and take it from there. – user8270077 Oct 07 '19 at 08:17
  • You could make use of dilation and erosion to create a mask that does not contain thin details, like the shield but would still get you to the skull. If you provide some image *without* decorations (axis, grid, etc.), I could show some code to illustrate this idea. – norok2 Oct 07 '19 at 08:26
  • Thank you, I will appreciate it! Please see my updated post. I have binarized the image. Now I need a method to isolate the skull. This should be done without me inspecting the image. As mentioned before there could be other edges present in the image that should be removed as not belonging to the head of the patient. – user8270077 Oct 07 '19 at 08:54
  • see my answer, but note that the proposed approach will remove any thin detail, potentially including those belonging to the head of the subject. – norok2 Oct 07 '19 at 09:29
  • What about using a different (much lower) threshold for the binarization? – norok2 Oct 07 '19 at 10:26
  • If that does not work, you may consider fitting an ellipse to the head and use that as mask – norok2 Oct 07 '19 at 10:28
  • The position of the head and its size are not known a priori. – user8270077 Oct 07 '19 at 10:30
  • that is why you have to use a fitting procedure – norok2 Oct 07 '19 at 10:35
  • 1
    Finally it works by using a lower threshold for binarization. Thank you!!! – user8270077 Oct 07 '19 at 10:45

2 Answers


You could use a combination of erosion (with an appropriate number of iterations) to remove the thin details, followed by dilation (also with an appropriate number of iterations) to restore the non-thin details to approximately the original size.

In code, this would look like:

import io
import requests

import numpy as np
import scipy as sp
import matplotlib as mpl
import PIL as pil

import scipy.ndimage
import matplotlib.pyplot as plt
import PIL.Image  # make sure the `PIL.Image` submodule is actually loaded


# : load the data
url = 'https://i.stack.imgur.com/G4cQO.png'
response = requests.get(url)
img = pil.Image.open(io.BytesIO(response.content)).convert('L')
arr = np.array(img)
mask_arr = arr.astype(bool)

# : strip thin objects
struct = None
n_erosion = 6
n_dilation = 7
strip_arr = sp.ndimage.binary_dilation(
    sp.ndimage.binary_erosion(mask_arr, struct, n_erosion),
    struct, n_dilation)

plt.imshow(mask_arr, cmap='gray')
plt.show()
plt.imshow(strip_arr, cmap='gray')
plt.show()
plt.imshow(mask_arr ^ strip_arr, cmap='gray')
plt.show()

Starting from this image (mask_arr):

[image: mask_arr]

One would get to this image (strip_arr):

[image: strip_arr]

The difference being (mask_arr ^ strip_arr):

[image: xor_arr]


EDIT

(addressing the issues raised in the comments)

Using a different input image, for example a binarization of the original with a much lower threshold, will give larger, non-thin details of the head that do not disappear during erosion.
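A minimal sketch of this threshold idea, using a tiny made-up array in place of the real data (the array contents and threshold values are only illustrative):

```python
import numpy as np

# Made-up grayscale slice in [0, 1]: only one pixel is near-white,
# but the whole head region is moderately bright.
arr = np.array([[0.05, 0.30, 0.95],
                [0.02, 0.45, 0.88]])

# A high threshold keeps only the brightest pixels (a thin rim):
high = arr > 0.9
# A much lower threshold keeps the whole head region, so the
# erosion step has thick structures left to work with:
low = arr > 0.2

print(high.sum(), low.sum())
```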

Alternatively, you may get more robust results by fitting an ellipse to the head.
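For the ellipse-fitting idea, a sketch along these lines could work; it uses scikit-image's `EllipseModel` on a synthetic elliptical blob standing in for the real binarized head (all sizes and coordinates here are made up for illustration):

```python
import numpy as np
from skimage.measure import EllipseModel
from skimage.segmentation import find_boundaries

# Synthetic stand-in for the binarized head: a filled ellipse
# centred at row 60, column 64 in a 128x128 image.
yy, xx = np.mgrid[0:128, 0:128]
mask = ((xx - 64) / 30.0) ** 2 + ((yy - 60) / 40.0) ** 2 <= 1.0

# Fit an ellipse to the boundary pixels of the mask
# (on real data, take the largest connected component first).
pts = np.column_stack(np.nonzero(find_boundaries(mask))).astype(float)
model = EllipseModel()
ok = model.estimate(pts)
# params are (xc, yc, a, b, theta); since the points were given as
# (row, col) pairs, the first coordinate is the row centre here.
rc, cc, a, b, theta = model.params
print(ok, round(rc), round(cc))
```

The rasterized fitted ellipse could then serve directly as the head mask.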

norok2
  • This is really good, but I need some more help. Please look at the updated post which shows 12 grayscale CT images, converted to 12 binarized images and then to 12 images stripped. – user8270077 Oct 07 '19 at 10:11
  • @user8270077 to be fair, I'd look for the exact object, instead of doing generic image processing. Its a table for the patients head, and in a given hospital/setup, there is either only 1 shape or very limited shapes that can exist, and they can only be in a particular area of the image. I'd start from that, instead of erosion, dilation, etc. – Ander Biguri Oct 09 '19 at 12:00

Echoing Ander Biguri's comment above, rather than "pure" image processing I'd suggest maybe a different approach (actually two).

The concept here is not to rely on purely algorithmic image processing, but to leverage knowledge of the specifics of your situation:

1) Given that the container is metal (as you stated), another approach that might be a lot easier is simple thresholding, based on the specific HU (Hounsfield unit) value of the metal frame.

While you show the images as simple greyscale, in reality CT images are 16-bit, and are window-levelled down to a 256-level (8-bit) greyscale representation for viewing; so the pictures above are not a true representation of the full information available in the underlying 16-bit image data.

The metal frame would likely have an HU value significantly different from (higher than) anything within the anatomy. If that is the case, then simple thresholding followed by subtraction would be a much simpler way to remove it.
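A toy sketch of this thresholding idea (the HU value assumed for the frame, ~3000, and the threshold of 2500 are assumptions; check them against your actual data):

```python
import numpy as np

# Made-up 16-bit CT slice in Hounsfield units: air ~ -1000,
# soft tissue ~ 0-100, bone ~ 300-1900, metal well above that.
ct = np.full((5, 5), -1000, dtype=np.int16)
ct[1:4, 1:4] = 50      # soft tissue
ct[2, 2] = 1200        # bone
ct[4, :] = 3000        # metal frame (assumed HU value)

# Threshold above any plausible anatomy, then "subtract" the frame
# by replacing it with air.
metal_mask = ct >= 2500
cleaned = ct.copy()
cleaned[metal_mask] = -1000

print(metal_mask.sum(), cleaned.max())
```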

2) Another approach would also be based on considering the geometry and properties of the specific situation you have:

In the images above, you could scan a vertical profile upwards along the middle column of the image to find the location of the frame, the location being the point where the profile crosses into an HU value that matches the frame.

From that point, you could use a flood-fill approach (e.g. scikit-image's flood_fill) to find all connected points within a certain tolerance.

That also would give you a set of points (mask) matching the frame that you could use to remove it from the original image.
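A rough sketch of this profile-plus-flood-fill idea using scikit-image's `flood` (the array contents, intensity values, and tolerance are made up for illustration):

```python
import numpy as np
from skimage.segmentation import flood

# Made-up slice: background 0, frame pixels at 0.95 along the bottom
# edge, "head" pixels around 0.5 in the middle.
img = np.zeros((6, 6))
img[5, :] = 0.95
img[2:4, 2:4] = 0.5

# Scan the middle column for a frame-valued pixel to use as a seed.
col = img.shape[1] // 2
rows = np.nonzero(img[:, col] > 0.9)[0]
seed = (rows[-1], col)  # lowest frame pixel found in that column

# Flood-fill from the seed: all connected pixels within tolerance
# of the seed value form the frame mask.
frame_mask = flood(img, seed, tolerance=0.05)
cleaned = np.where(frame_mask, 0.0, img)
print(frame_mask.sum(), cleaned.max())
```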


I'm thinking that either of these approaches would be both faster and more robust for the situation you're proposing.

Richard