
I have a Z-stack of 2D confocal microscopy images (2D slices) and I want to segment cells. The Z-stack of 2D images is effectively 3D data, and the same cells appear in multiple slices along the Z-axis. Since I am interested in cell shape in the XY plane, I want to preserve the largest cell area across the Z-axis slices. My idea was to combine consecutive 2D slices after converting them to labelled binary images, but I am having a few issues and need some help to proceed further.

I have two images, img_a and img_b. I first converted them to binary images using Otsu thresholding, then applied some morphological operations, and then used cv2.connectedComponentsWithStats() to obtain labelled objects. After labelling the images, I combined them using cv2.bitwise_or(), but that messes up the labels. You can see this in the attached processed image (cell highlighted by red circles): the overlapping cell gets multiple labels. Instead, I want to assign one unique label to every combined overlapping object.

In the end, when I combine the two labelled images, I want the combined overlapping objects to receive a single unique label, while keeping the largest cell area from either image. Does anyone know how to do this?

Here is the code:

from matplotlib import pyplot as plt
from skimage import io, color, measure
from skimage.util import img_as_ubyte
from skimage.segmentation import clear_border
import cv2
import numpy as np

cells_a = img_a[:, :, 1]  # img_a is one slice (loaded elsewhere); take the green channel
# Threshold image to binary using Otsu.
ret_a, thresh_a = cv2.threshold(cells_a, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening to remove small noise
kernel = np.ones((3, 3), np.uint8)
opening_a = cv2.morphologyEx(thresh_a, cv2.MORPH_OPEN, kernel, iterations=2)
opening_a = clear_border(opening_a)  # remove edge-touching pixels

numlabels_a, labels_a, stats_a, centroids_a = cv2.connectedComponentsWithStats(opening_a)
img_a1 = color.label2rgb(labels_a, bg_label=0)

## Now do the same with img_b
cells_b = img_b[:, :, 1]  # green channel
ret_b, thresh_b = cv2.threshold(cells_b, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

opening_b = cv2.morphologyEx(thresh_b, cv2.MORPH_OPEN, kernel, iterations=2)
opening_b = clear_border(opening_b)  # remove edge-touching pixels

numlabels_b, labels_b, stats_b, centroids_b = cv2.connectedComponentsWithStats(opening_b)
img_b1 = color.label2rgb(labels_b, bg_label=0)

## Combine the two labelled images to get the maximum area per cell
combined = cv2.bitwise_or(labels_a, labels_b)  # ORs the label values, not the masks -- this is what scrambles the labels
combined_img = color.label2rgb(combined, bg_label=0)
plt.imshow(combined_img)
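For illustration, here is a toy reproduction of the scrambling with hypothetical 4×4 label arrays (using scipy.ndimage.label in place of cv2.connectedComponentsWithStats): ORing the integer label values bitwise manufactures labels that exist in neither input, whereas ORing the binary masks and relabelling does not.

```python
import numpy as np
from scipy import ndimage

# hypothetical labelled slices: the same cell, labelled 1 in slice A
# and 2 in slice B, overlapping in XY
labels_a = np.zeros((4, 4), np.int32)
labels_a[0:3, 0:3] = 1
labels_b = np.zeros((4, 4), np.int32)
labels_b[1:4, 1:4] = 2

# ORing the label *values* bitwise: overlap pixels become 1 | 2 == 3,
# a label that exists in neither input
combined = labels_a | labels_b
assert combined[2, 2] == 3

# ORing the binary *masks* and relabelling gives one clean label
mask = (labels_a > 0) | (labels_b > 0)
relabelled, n = ndimage.label(mask)
assert n == 1
```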

Images can be found here:

  • so... find blobs that overlap, note that the labels of both blobs are equivalent, and then reassign labels? I didn't see a question mark in your question. It helps to have a question. – Christoph Rackwitz Jan 23 '22 at 13:55
  • Yes, find blobs that overlap, but also keep the blobs which do not overlap in both images. Two blobs can have the same label by chance, or different ones, since the number of cells differs between images. For overlapping blobs, I want to preserve all the pixels that appear in both images and reassign a label. The merged blobs (combined from overlapping cells) should then be seen as single objects so that I can do further analysis. Do you know how to do that? – HT121 Jan 23 '22 at 14:15
  • 1
    I might be missing something here, but why not combine the two binary images and *then* label the connected components? Labeling for the individual images can be found by using the binary image as a mask on the processed image. – beaker Jan 23 '22 at 15:13
  • If I combine first and do the segmentation later, it works, but there is a problem: I have many images in the z-stack, and if I combine them all first and try to segment afterwards, it becomes almost impossible, because there might be a few different cells around the same XY coordinates in different images of the z-stack. That's why I thought I would segment and label them first, then combine the overlapping ones. If there is no overlap between slices and another cell later appears at the same location, then it is a different cell and I do not want to merge different cells. – HT121 Jan 23 '22 at 15:27
  • take the `np.maximum()` of both (grayscale/green) pictures to combine them, then threshold and CC. the two sample pictures would combine nicely like that. I don't see light blobs moving or anything. they only vary in brightness between pictures. – Christoph Rackwitz Jan 23 '22 at 15:34
  • if you _really_ need to mess around with re-labeling, that'll be a bit messy. you'd need to calculate the intersection of both masks, and in those pixels collect the label from both pictures and then collect those tuples for all the overlapping pixels. that'd take some numpy operations. nothing trivial but not too extravagant either. I'm just hoping you can use a different approach that prevents the problem entirely (see probing questions above). – Christoph Rackwitz Jan 23 '22 at 15:36
  • Only the cells that need to be merged are the ones that overlap in consecutive images. But if another cell appears at the same location after a few slices, with no overlap in the previous slice, then the new cell should be kept as a separate entity. I find it a bit tricky. Any suggestion how to achieve this? – HT121 Jan 23 '22 at 15:37
  • I can't quite follow. please show an example of that situation. -- do you mean to say that this is **3D (voxel) data**? that is important to know and changes the entire premise! then your problem isn't combining slices, it's striking the word "slice" out of your mind and considering the whole volume of data, and segmenting that, not individual slices. – Christoph Rackwitz Jan 23 '22 at 15:48
  • Oh, sorry it was not clear. Yes, it is a kind of 3D data: I have a z-stack of 2D images where every image is taken at a certain z-axis depth, and I am referring to these 2D images as slices. Yes, I want to segment the whole 3D data (comprised of 2D images) and then look at cell elongation. Do you have some suggestions? Do you think the way I am approaching this is useless and it should be done differently? – HT121 Jan 23 '22 at 16:01
  • Since I am only interested in cell elongation in the XY plane, I thought to achieve it this way by getting rid of, or merging, overlapping cells in consecutive slices. I will try to re-formulate the question and include details about the 3D data etc. – HT121 Jan 23 '22 at 16:19
  • It sounds like you're looking for 3d connected component labeling. I think MATLAB and scikit already implement this, but I don't think OpenCV does. You could modify the 2-pass connected component labeling algorithm to work in 3d, but it would take some work. – beaker Jan 23 '22 at 16:24
  • Ok great. I will try to search for 3D connected component labeling in scikit. – HT121 Jan 23 '22 at 16:26
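Christoph Rackwitz's np.maximum suggestion from the thread above can be sketched minimally: take the per-pixel maximum of the grayscale green-channel slices, then threshold the merged image once before labelling (toy 2×2 arrays; a fixed threshold stands in for Otsu here):

```python
import numpy as np

# hypothetical grayscale green-channel slices
a = np.array([[0, 40], [200, 10]], np.uint8)
b = np.array([[0, 180], [90, 0]], np.uint8)

merged = np.maximum(a, b)  # brightest value per pixel across slices
mask = merged > 50         # fixed threshold standing in for Otsu
assert mask.tolist() == [[False, True], [True, False]]
```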

1 Answer


Based on the comments from Christoph Rackwitz and beaker, I started looking into 3D connected components labeling. I found a Python library that can handle this, installed it, and gave it a try. It seems to work well: it assigns labels in each slice and keeps the label the same for the same cell across slices, which is exactly what I wanted.

Here is the link to the library I used to label objects in 3D: https://pypi.org/project/connected-components-3d/
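The same 3D-labelling idea can be sketched with scipy.ndimage.label, which accepts 3D arrays when given a 3D connectivity structure (the analogous call in the library above should be cc3d.connected_components(stack, connectivity=26)). The stack below is a hypothetical toy example, including the follow-up step of keeping, per cell, the slice with the largest XY cross-section:

```python
import numpy as np
from scipy import ndimage

# toy binary z-stack (z, y, x): cell A spans slices 0-1, cell B sits
# elsewhere in slice 2 only
stack = np.zeros((3, 8, 8), bool)
stack[0, 1:4, 1:4] = True   # cell A in slice 0
stack[1, 2:5, 2:5] = True   # cell A in slice 1 (overlaps slice 0 in XY)
stack[2, 6:8, 6:8] = True   # cell B, no overlap with slice 1

# 26-connectivity in 3D: voxels touching across slices share a label
labels3d, n = ndimage.label(stack, structure=np.ones((3, 3, 3)))
assert n == 2                                  # two distinct cells
assert labels3d[0, 2, 2] == labels3d[1, 3, 3]  # cell A keeps one label

# keep, per cell, the slice where its XY cross-section is largest
for cell_id in range(1, n + 1):
    areas = (labels3d == cell_id).sum(axis=(1, 2))  # XY area per slice
    z_best = int(areas.argmax())                    # slice to keep
```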
