I was going through a research paper [link] that describes the following algorithm for extracting texture features from an image.
What I understand from the algorithm is that I need to split the image into a grid of 4x4 blocks, compute a GLCM for each 4x4 block, and then calculate the texture properties from each of those matrices.
My issue with this method is that the image is 256x384, which gives 64x96 = 6144 blocks, and computing a GLCM 6144 times per image is extremely computation-heavy, especially because I have 900 such images (about 5.5 million GLCMs in total). One workaround I'm considering is sketched at the end of the post.
The code is as follows:
import cv2
import numpy as np
from skimage.feature import greycomatrix, greycoprops  # spelled graycomatrix/graycoprops in newer scikit-image

def texture_extract(img):
    distances = [1]
    angles = [0, np.pi/4, np.pi/2, 3*np.pi/4]
    properties = ['correlation', 'homogeneity', 'contrast', 'energy', 'dissimilarity']
    gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray_img, (5, 5), 0)
    texture_features = []
    # Slide over the image in non-overlapping 4x4 blocks
    for i in range(0, blur.shape[0], 4):
        for j in range(0, blur.shape[1], 4):
            block = blur[i:i+4, j:j+4]
            # One GLCM per block: distance 1, four angles
            glcm_mat = greycomatrix(block, distances=distances, angles=angles, symmetric=True, normed=True)
            # 5 properties x 1 distance x 4 angles = 20 values per block
            block_glcm = np.hstack([greycoprops(glcm_mat, prop).ravel() for prop in properties])
            texture_features.append(block_glcm)
    return np.concatenate(texture_features)
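For context, this is roughly how I call it over the dataset (the folder path and glob pattern are just placeholders, not my real paths):

import glob

image_paths = glob.glob('images/*.png')  # hypothetical location of the 900 images

all_features = []
for path in image_paths:
    img = cv2.imread(path)  # BGR uint8, as texture_extract expects
    all_features.append(texture_extract(img))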
All I want to know is: is my understanding of the algorithm correct, or am I making a stupid mistake somewhere?
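In case it's relevant, the workaround I mentioned above (my own idea, not something from the paper) is to quantise the blurred image down to 16 grey levels before computing the GLCMs, so each matrix is 16x16 instead of 256x256. A minimal sketch, assuming 16 levels is acceptable:

def texture_extract_quantised(img, levels=16):
    gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray_img, (5, 5), 0)
    # Map 0..255 down to 0..levels-1 so each co-occurrence matrix is levels x levels
    quant = (blur // (256 // levels)).astype(np.uint8)
    texture_features = []
    for i in range(0, quant.shape[0], 4):
        for j in range(0, quant.shape[1], 4):
            block = quant[i:i+4, j:j+4]
            glcm_mat = greycomatrix(block, distances=[1],
                                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                                    symmetric=True, normed=True, levels=levels)
            texture_features.append(np.hstack([
                greycoprops(glcm_mat, p).ravel()
                for p in ['correlation', 'homogeneity', 'contrast', 'energy', 'dissimilarity']]))
    return np.concatenate(texture_features)

I realise the quantisation changes the feature values, so the output may not match the paper exactly; I'd also appreciate a sanity check on whether that tradeoff is reasonable.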