Given a 2D array with values ranging from 0 to n, I would like to dilate each pixel by the value it contains; higher values should overwrite lower values during dilation.

That is to say, any pixel within a radius equal to a given pixel's value inherits that value if its own value is lower. For example, if the input is [0 1 0 2 0 0], the output would be [1 2 2 2 2 2].

How could this be implemented?


1 Answer

Your verbal description is exactly how you would implement the algorithm: for each pixel (let's call its value r), examine its neighborhood, with the neighborhood's size given by r. Within this neighborhood, set all pixels to r if their value is lower. You do need to write to an independent output image; you cannot modify the input in place without breaking the logic.

This is of course implemented with a set of nested loops: first loop over each pixel in the image, and inside this loop, loop over each pixel in a neighborhood.

Python is slow with loops, so this is not going to be efficient when written in Python. You could try using numba to speed it up. If sufficiently important, write it in a native language like C or C++.
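For concreteness, here is a minimal sketch of that nested-loop version in Python with NumPy. It assumes a square (Chebyshev) neighborhood of radius r, clipped at the image borders; the function name and the neighborhood shape are illustrative choices, not something fixed by the question:

```python
import numpy as np

def dilate_by_value(img):
    # Write into a separate output so the input is never modified.
    out = img.copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            r = img[i, j]
            if r == 0:
                continue  # a zero pixel dilates nothing
            # Clip the (2r+1) x (2r+1) neighborhood to the image bounds.
            i0, i1 = max(i - r, 0), min(i + r + 1, rows)
            j0, j1 = max(j - r, 0), min(j + r + 1, cols)
            # Overwrite only pixels whose current value is lower than r.
            np.maximum(out[i0:i1, j0:j1], r, out=out[i0:i1, j0:j1])
    return out
```

On the example from the question, dilate_by_value(np.array([[0, 1, 0, 2, 0, 0]])) returns [[1, 2, 2, 2, 2, 2]].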


It is possible that, if n is small, this operation could be sped up a bit compared to the obvious implementation, using a threshold decomposition.

The idea is that you loop over the gray-values r in the image. At each iteration, dilate the binary image img==r (which is true for pixels with the value r, false elsewhere) with a structuring element (SE) of size r. Next, compose the final output image by taking the element-wise maximum across the dilated images. Note that you can accumulate this final result stepwise within the loop, taking the max of the previous result and the new dilation.

This implementation does more work, but since you are using whole-image operations, you minimize the number of Python loops, and hence (hopefully) speed up the code.
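A sketch of that decomposition, assuming SciPy's ndimage is available (the square SE mirrors the Chebyshev neighborhood used above, and the function name is again an illustrative choice):

```python
import numpy as np
from scipy import ndimage

def dilate_by_value_decomposed(img):
    out = img.copy()
    for r in np.unique(img):
        if r == 0:
            continue  # nothing to dilate for the zero layer
        # Square structuring element of radius r (Chebyshev distance);
        # substitute a disk-shaped mask for a Euclidean neighborhood.
        se = np.ones((2 * r + 1, 2 * r + 1), dtype=bool)
        # Dilate the binary layer of pixels equal to r ...
        layer = ndimage.binary_dilation(img == r, structure=se)
        # ... and fold it into the running element-wise maximum.
        np.maximum(out, np.where(layer, r, 0).astype(img.dtype), out=out)
    return out
```

Looping over np.unique(img) rather than range(n + 1) skips gray values that never occur in the image, so the number of whole-image dilations is bounded by the number of distinct values actually present.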

  • Yes, it's too slow in Python. I was thinking of methods to optimise the algorithm, or somehow use numpy or a library that has such functions, like opencv – Octavio del Ser Mar 08 '20 at 13:48
  • @Octavio: There’s no standard implementation for this operation. If `n` is small you might use the decomposition theorem to speed up the operation a bit. Let me add that to my answer. – Cris Luengo Mar 08 '20 at 13:52
  • I've managed to make it work by decomposing the matrix with numpy and then dilating each layer by its given amount, a bit like a topological map, and then recombining all layers, prioritising larger values. It worked out quite fast thanks to numpy and opencv. – Octavio del Ser Mar 09 '20 at 01:04
  • The only downside is that I had to have a threshold for each layer; otherwise there would simply be too many layers if the gradient is very smooth. But that's OK, as 50-70 layers is still faster than going node by node. – Octavio del Ser Mar 09 '20 at 01:06