I am trying to use SIFT for object classification with OpenCV's Normal Bayes classifier. When I compute the descriptors for images of variable size, I get feature matrices of different sizes, e.g.:
Feature Size: [128 x 39]
Feature Size: [128 x 54]
Feature Size: [128 x 69]
Feature Size: [128 x 64]
Feature Size: [128 x 14]
For development I am using 20 training images, so I have 20 labels. There are only 3 classes: car, book and ball. My label vector size is therefore [1 x 20].
As far as I understand, for machine learning the number of feature vectors must match the number of labels, so I should end up with training data of size [__ x 20] and a label vector of size [1 x 20].
But my problem is that SIFT has a 128-dimensional feature space, and each image yields a different number of descriptors, as shown above. How do I convert them all to the same size without losing features? Or perhaps I am doing this incorrectly, so please help me out.
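One naive idea I considered (my own guess, not claiming it's correct) is to collapse each image's descriptor matrix into a single fixed-size vector by averaging, so every image contributes exactly one training row. A numpy sketch with hypothetical descriptor matrices of the sizes listed above:

```python
import numpy as np

# Hypothetical descriptor matrices (rows = keypoints, cols = 128);
# the real ones would come from SIFT on my training images.
rng = np.random.default_rng(0)
per_image = [rng.random((n, 128), dtype=np.float32) for n in (39, 54, 69, 64, 14)]

# Average each image's descriptors into one 128-d vector, giving a
# uniform (num_images x 128) training matrix.
train_data = np.vstack([d.mean(axis=0) for d in per_image])
print(train_data.shape)  # (5, 128): one fixed-size row per image
```

But averaging obviously throws away per-keypoint information, which is exactly what I want to avoid.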
PS: I have actually done this with a BOW model and it works, but I am trying this approach purely for my own learning, out of interest, so any hints and advice are welcome. Thank you.