I have an image that has already been tiled and then segmented (outside my pipeline). Segmentation was carried out separately on each tile, and I have the corresponding label_image arrays.
These are simply arrays the same size as the tiles; the value of each element corresponds to a real-life object on the tile: zero means background, 1 means the first object, and so on. For example, if we had an image with two objects, one diamond-shaped in the top-left corner and one square in the bottom-right corner, then the label_image array would look like this:
[0, 0, 0, 0, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0, 0, 0,
1, 1, 1, 0, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 2, 2, 2, 0,
0, 0, 0, 0, 2, 2, 2, 0,
0, 0, 0, 0, 2, 2, 2, 0,
0, 0, 0, 0, 0, 0, 0, 0]
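For concreteness, this is the same example written as a NumPy array (a minimal sketch; the variable name label_image is just for illustration):

    import numpy as np

    # The example above as a NumPy array: 0 is background, 1 is the
    # diamond-shaped object, 2 is the square object.
    label_image = np.array([
        [0, 0, 0, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 0, 0],
        [1, 1, 1, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 2, 2, 2, 0],
        [0, 0, 0, 0, 2, 2, 2, 0],
        [0, 0, 0, 0, 2, 2, 2, 0],
        [0, 0, 0, 0, 0, 0, 0, 0],
    ])

    print(np.unique(label_image))  # [0 1 2] -> background plus two objects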
In my case, let's say the original image looks like the one shown below on the left, but segmentation was done on each of the 4 smaller images outlined in red on the right, and I have the four label_image arrays.
How do I join these four smaller label_images to get one single label_image for the big image, please?
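To make the question concrete, here is a naive way to paste the tiles together that keeps the labels unique but does not resolve objects split across tile boundaries (a minimal sketch, assuming the four arrays sit in a 2x2 nested list; tiles is just a placeholder name, not something from my actual pipeline):

    import numpy as np

    def naive_stitch(tiles):
        # tiles: 2x2 nested list of per-tile label_image arrays.
        # Non-zero labels are offset so every object keeps a unique
        # number across tiles, then the tiles are pasted back together.
        offset = 0
        stitched_rows = []
        for tile_row in tiles:
            shifted = []
            for tile in tile_row:
                shifted.append(np.where(tile > 0, tile + offset, 0))
                offset += tile.max()
            stitched_rows.append(np.hstack(shifted))
        return np.vstack(stitched_rows)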
An object (a coin in this example) that sits on a tile boundary will appear in at least two label_images. In the end result, however, it should be expressed as one unified, unclipped object, and should therefore be given a single label number.
Apart from the label_images, I also have the centroids and everything else that comes from the regionprops function.
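To show what I mean, this is roughly how the per-tile measurements were obtained (a toy sketch with a made-up 4x4 tile; the real call runs on each of the four label_image arrays):

    import numpy as np
    from skimage.measure import regionprops

    # Toy 4x4 tile standing in for one of the real label_image arrays.
    tile = np.array([[0, 1, 0, 0],
                     [1, 1, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 2]])

    # For every labelled object I have its centroid (in tile coordinates),
    # bounding box, area, and so on.
    props = regionprops(tile)
    centroids = {p.label: p.centroid for p in props}
    print(centroids)  # {1: (1.0, 1.0), 2: (3.0, 3.0)}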
(image taken from here)