
I have been calculating the entropy of an image with a pixel-by-pixel convolution operation. It works, but execution time grows quickly with the kernel size.

Here is my function; before calling it I read an image with GDAL and convert it to an array.

import numpy as np
from numba import jit, prange

@jit  # NOTE: the set()/list operations below are not supported in nopython mode, so this falls back to slow object mode
def convolution(ArrayES, ImArray, rows, cols, kernel, option):
    for row in prange(rows):
        for col in prange(cols):
            Lx=max(0,col-kernel+1)
            Ux=min(cols,col+kernel+1)
            Ly=max(0,row-kernel+1)
            Uy=min(rows,row+kernel+1)
            mask=ImArray[Ly:Uy,Lx:Ux].flatten()
            He=0.0
            lenVet=mask.size
            horList=list(set(mask))
            if len(horList)==1 and horList.count(0)==1:
                ArrayES[row,col]=0.0
            else:
                prob=[(mask[mask==i]).size/(lenVet*1.0) for i in horList]
                for p in prob:
                    if p>0:
                        He += -1.0*p*np.log2(p)
                if option==0:
                    ArrayES[row,col]=He
                N=len(horList)*1.0
                if N == 1:
                    C=0
                else:
                    Hmax=np.log2(N)
                    C=He/Hmax
                if option==1:
                    ArrayES[row,col]=C
                if option==2:
                    SDL=(1-C)*C
                    ArrayES[row,col]=SDL
                if option==3:
                    D = 0.0
                    for p in prob:
                        D += (p-(1/N))**2
                    LMC=D*C
                    ArrayES[row,col]=LMC
    return ArrayES

The problem appears when the kernel size is > 7. How can I improve it?

  • It isn't really clear what you are trying to do here, and this should probably be in the "algorithms" tag. `scipy` has an image convolution function that might be helpful: https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.filters.convolve.html – AirSquid Apr 04 '20 at 20:26

1 Answer


Similar to MATLAB, the key to speeding up operations like this is called "vectorization": remove the for-loops and convert your calculations into vector and matrix operations. For each step, find a way to group all qualifying pixels and operate on them with one call.

Read this for more details:

https://www.geeksforgeeks.org/vectorization-in-python/

Many of the approaches are similar to vectorization in MATLAB:

https://www.mathworks.com/help/matlab/matlab_prog/vectorization.html
https://blogs.mathworks.com/videos/2014/06/04/vectorizing-code-in-matlab/
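To make the idea concrete, here is a minimal sketch of the vectorized approach for the entropy case (option 0). This is an illustration under assumptions, not the poster's exact method: it assumes an integer image with a modest number of distinct values, loops over intensity values instead of pixels, and uses `scipy.ndimage.uniform_filter` to get each value's frequency per window. Note that `mode='constant'` zero-pads the borders, which treats edge pixels slightly differently from the original code's shrinking windows.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def entropy_filter(image, size):
    """Local entropy over a size x size neighborhood, one pass per distinct value."""
    He = np.zeros(image.shape, dtype=np.float64)
    for v in np.unique(image):
        # Fraction of pixels equal to v in each window: the mean of the
        # indicator image (image == v) over the window is count / size**2.
        p = uniform_filter((image == v).astype(np.float64),
                           size=size, mode='constant')
        nz = p > 0
        He[nz] -= p[nz] * np.log2(p[nz])  # accumulate -p*log2(p) per value
    return He
```

The per-pixel Python loop disappears; only a short loop over distinct intensity values remains, and each iteration is a single filtered array operation.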

FangQ