
I am trying to implement an image search engine using [AlexNet](https://github.com/akrizhevsky/cuda-convnet2).

The idea is to implement an image search engine by training a neural net to classify images and then using the code from the net's last hidden layer as a similarity measure.
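To make the idea concrete, here is the retrieval step I have in mind once I can extract a feature vector per image from the net's last hidden layer (just a rough sketch; the names are placeholders, and the feature extraction part is what I am asking about):

```python
import numpy as np

def cosine_similarity(a, b):
    # a, b: feature vectors taken from the net's last hidden layer
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vector, index):
    # index: list of (image_id, feature_vector) pairs extracted offline
    scored = [(image_id, cosine_similarity(query_vector, vec))
              for image_id, vec in index]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```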

I am trying to figure out how to train the CNN on a new set of images to classify them. Does anyone know how to get started with this?

Thanks

  • Have you tried the [Training Example](https://github.com/akrizhevsky/cuda-convnet2/blob/wiki/TrainingExample.md)? Is there a specific step you are struggling with? – jayms Apr 24 '16 at 16:41

1 Answer


You basically have two approaches to your problem:

- Either you have plenty of good training data (more than ~1M images) and dozens of GPUs, in which case you retrain the network from scratch with SGD on the classes you care about for your queries.

- Or you don't, in which case you simply truncate a pretrained AlexNet (where exactly you truncate it is up to you) and feed it your images, resized to fit the network's input (227x227x3 if I am not mistaken). Each image then yields a feature vector (sometimes called a descriptor), and you use those feature vectors to train a linear SVM for your specific task (see the sketch below).
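If it helps, here is a minimal sketch of the second approach. It uses torchvision's pretrained AlexNet and scikit-learn's `LinearSVC` as stand-ins (not cuda-convnet2), truncating the classifier just before the final fully-connected layer so each image gives a 4096-dimensional descriptor; the image paths and labels are placeholders for your own data:

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image
from sklearn.svm import LinearSVC

# Pretrained AlexNet; drop the last fully-connected layer so the forward
# pass returns the 4096-d activations of the last hidden layer instead of
# the 1000 ImageNet class scores.
model = models.alexnet(pretrained=True)
model.classifier = torch.nn.Sequential(*list(model.classifier.children())[:-1])
model.eval()

# torchvision's AlexNet expects 224x224 crops normalized with ImageNet stats.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def descriptor(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0).numpy()  # 4096-d feature vector

# image_paths and labels come from your own labelled dataset.
# X = np.stack([descriptor(p) for p in image_paths])
# clf = LinearSVC().fit(X, labels)
```

The same descriptors can also be compared directly (e.g. with cosine similarity) for the retrieval part of your search engine.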

jeandut