I have a set of 25 images labelled 'Infected' and 25 images labelled 'Normal'. I am trying to extract dual-tree complex wavelet transform (DT-CWT) coefficients as features for each image.
My code to obtain the coefficients using DT-CWT is as follows:

I = imread('infected_img1.jpg');                        % read image
I = rgb2gray(I);                                        % RGB to grayscale
L = 6;                                                  % number of levels for wavelet decomposition
I = reshape(I',1,size(I,1)*size(I,2));                  % change into a row vector (row-major order)
I = [I, zeros(1, mod(2^L - rem(length(I),2^L), 2^L))];  % zero-pad so length(I) is a multiple of 2^L
I = double(I);
dt = dddtree('cplxdt',I,L,'dtf3');                      % perform DT-CWT
dt_Coeffs = dt.cfs{L}(:,:,1) + 1i*dt.cfs{L}(:,:,2);     % extract complex coefficients at level 6
Now, since I have 24 more images to extract coefficients from, I repeat this block for each image. My ultimate aim is to append the coefficient vector produced in each iteration into a single feature matrix. But each image tends to give a coefficient vector of a different length, presumably because the images do not all have the same dimensions. A rough sketch of my loop is below.
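For reference, this is roughly the loop I am running. The filename pattern img_%02d.jpg and the fixed count of 50 are placeholders for my actual file list; the DT-CWT part is the same block as above.

L = 6;                        % decomposition level
numImages = 50;               % 25 'Infected' + 25 'Normal'
coeffs = cell(1,numImages);   % one complex coefficient vector per image
for k = 1:numImages
    I = imread(sprintf('img_%02d.jpg',k));        % placeholder filename pattern
    I = rgb2gray(I);                              % RGB to grayscale
    v = double(reshape(I',1,numel(I)));           % vectorise in row-major order
    pad = mod(2^L - rem(length(v),2^L), 2^L);     % pad only if length is not a multiple of 2^L
    v = [v, zeros(1,pad)];
    dt = dddtree('cplxdt',v,L,'dtf3');            % DT-CWT, as in the block above
    coeffs{k} = dt.cfs{L}(:,:,1) + 1i*dt.cfs{L}(:,:,2);   % level-6 complex coefficients
end
% The vectors stored in coeffs end up with different lengths,
% so I cannot simply concatenate them into one matrix.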
I want to know about a dimensionality reduction method that can bring every vector to a uniform length while preserving as much of its information as possible. Can anyone suggest suitable methods and explain them clearly?
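To make concrete what I mean by a uniform size: the only naive idea I have so far is to interpolate each coefficient vector (here just its magnitudes, discarding phase) to some arbitrary common length D with interp1. D = 512 and the use of magnitudes are placeholders, and I doubt this preserves the information well, which is exactly why I am asking. Continuing from the loop above:

D = 512;                                 % arbitrary common length (placeholder)
featureMatrix = zeros(numImages, D);     % 50-by-D matrix, one row per image
for k = 1:numImages
    c = abs(coeffs{k}(:)).';             % magnitudes as a row vector (phase discarded)
    featureMatrix(k,:) = interp1(1:length(c), c, linspace(1,length(c),D));  % resample to length D
end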