
I'm testing features extracted from the pretrained InceptionV3 and ResNet50 models (Keras with a TensorFlow backend), and each gives wildly different results for simple image similarity.

I have used the extracted features both as-is and normalized, but the outcome is the same.
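
For reference, here is a simplified sketch of this kind of extraction (a minimal example, not my exact code: pooled features from the headless models are an assumption, and 'image-a.jpg' is a placeholder path):

    import numpy as np
    from keras.preprocessing import image
    from keras.applications.inception_v3 import InceptionV3, preprocess_input as pre_inc
    from keras.applications.resnet50 import ResNet50, preprocess_input as pre_res

    def extract(model, preprocess, img_path, size):
        # Load one image, preprocess it for the given model, and
        # return a flat feature vector from the pooled output.
        img = image.load_img(img_path, target_size=size)
        x = np.expand_dims(image.img_to_array(img), axis=0)
        return model.predict(preprocess(x)).flatten()

    # Headless models with global average pooling over the last conv block.
    inc = InceptionV3(weights='imagenet', include_top=False, pooling='avg')
    res = ResNet50(weights='imagenet', include_top=False, pooling='avg')

    # 'image-a.jpg' is a placeholder; each model uses its native input size.
    f_inc = extract(inc, pre_inc, 'image-a.jpg', (299, 299))
    f_res = extract(res, pre_res, 'image-a.jpg', (224, 224))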

Anyone know why?

Henry Thornton

1 Answer


Assuming you mean features extracted from the flattened layer after the last convolutional block, this is to be expected, since the architectures are different. The feature spaces are therefore conceptually different as well: the features can only be used for similarity checks within a single model, and they do not match across models.
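
For example, a cosine similarity is only meaningful between vectors from the same feature space. A minimal sketch with random placeholder vectors (both models' average-pooled outputs happen to be 2048-d, but the axes encode different learned features):

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine similarity between two 1-D feature vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Placeholder vectors standing in for pooled features of two images.
    f_inc_a, f_inc_b = np.random.rand(2048), np.random.rand(2048)
    f_res_a = np.random.rand(2048)

    # Meaningful: both vectors live in InceptionV3's feature space.
    print(cosine_similarity(f_inc_a, f_inc_b))

    # Not meaningful: same dimensionality, but the two spaces are
    # unrelated, so cross-model similarity is essentially noise.
    print(cosine_similarity(f_inc_a, f_res_a))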

petezurich
  • Thanks for the clarification. This confirms that pretrained models can classify (say, the 1000 ImageNet classes) nicely but are not really suitable for feature extraction, as results differ between models, which is a bit like manual feature engineering performed by two separate people using two different techniques. – Henry Thornton Jun 10 '17 at 08:56
  • Can you elaborate on why you want to use different models? With the features extracted from one model you can do image similarity searches or image clustering like this: https://www.flickr.com/photos/genekogan/24873243915 – petezurich Jun 10 '17 at 16:02
  • I don't want to use different models. I'm trying to figure out why a similarity search for image-a with InceptionV3 returns a completely different set of results than a similarity search for image-a with ResNet50 (a per-model search of this kind is sketched below). Even if the feature spaces are different, why are the results so different for the same image? – Henry Thornton Jun 10 '17 at 16:24
  • Interesting. That indeed does not make sense. In that case it might be helpful if you share some code snippets and describe how exactly you do the comparison. – petezurich Jun 10 '17 at 16:36
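
For reference, a per-model nearest-neighbour search of the kind discussed above might look like this (a minimal sketch with random placeholder data; top_k_similar is a hypothetical helper, not part of Keras):

    import numpy as np

    def top_k_similar(query_vec, gallery, k=5):
        # L2-normalize, then rank gallery rows by cosine similarity.
        q = query_vec / np.linalg.norm(query_vec)
        g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        return np.argsort(-g.dot(q))[:k]

    # Placeholder galleries standing in for per-model feature matrices
    # (one row of pooled features per gallery image).
    gallery_inc = np.random.rand(100, 2048)
    gallery_res = np.random.rand(100, 2048)

    # Each search runs entirely within one model's own feature space;
    # the two models can still rank neighbours differently, since each
    # network has learned its own representation.
    print(top_k_similar(gallery_inc[0], gallery_inc))
    print(top_k_similar(gallery_res[0], gallery_res))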