
I am using a pre-trained GoogLeNet model and fine-tuning it on my dataset to classify 11 classes. I tried the following configurations with different base learning rates, but the accuracy is not improving further.

  1. Fine-tuning the last 10 layers and the first 3 layers of the pre-trained GoogLeNet with a base learning rate of 0.01 and a maximum of 50K iterations. This configuration does not give accuracy better than 75%.

  2. Fine-tuning only the last 2 layers with a base learning rate of 0.01 and a maximum of 50K iterations. This configuration does not give accuracy better than 71%.

  3. Fine-tuning the last 6 layers with a base learning rate of 0.001 and a maximum of 50K iterations. This configuration does not give accuracy better than 85%.
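For context, a run like the third configuration would be driven by a Caffe solver.prototxt along these lines. This is a sketch, not the asker's actual file: the net path, learning-rate policy, and step size are assumptions, while base_lr and max_iter come from the question.

```
# Sketch of a solver.prototxt for fine-tuning (paths and lr_policy are assumptions)
net: "train_val.prototxt"   # hypothetical net definition file
base_lr: 0.001              # low base rate for fine-tuning, as in configuration 3
lr_policy: "step"           # drop the rate periodically (assumed policy)
gamma: 0.1
stepsize: 20000             # illustrative value
max_iter: 50000
momentum: 0.9
weight_decay: 0.0005
snapshot: 10000
snapshot_prefix: "finetune_googlenet"
solver_mode: GPU
```

Layers being fine-tuned would keep a non-zero lr_mult in the net definition, while frozen layers get lr_mult: 0.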

Can anybody tell me what other methods or parameters I can change to improve the accuracy?

Ashutosh Singla
  • How about changing the dataset? – Ashutosh Singla Jun 08 '16 at 11:44
  • 3
    First of all, are you correctly performing model selection + performance estimation? Just trying different hyperparameters and selecting the one with the highest performance leads to overfitting. Second, as already mentioned by @AshutoshSingla, it may not be possible to get a better performance with the specific dataset. – George Jun 08 '16 at 15:28
  • Have you been babysitting your learning process? Maybe you should stop training earlier? Have you tried a different optimizer, like RMSProp or Adam? – Marcin Możejko Jun 08 '16 at 20:50

1 Answer


You can use other optimisers such as AdaDelta, Adam, and RMSProp. In your solver.prototxt you can select one by setting the type parameter, e.g. type: "RMSProp".

For RMSPROP, you can modify the parameters as mentioned here.
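As a minimal sketch, the RMSProp-specific solver settings could look like the fragment below. The rms_decay and learning-rate values are illustrative assumptions, not values from the answer.

```
# Sketch: RMSProp solver settings in solver.prototxt (values are illustrative)
type: "RMSProp"
base_lr: 0.001
rms_decay: 0.98     # decay rate of the moving average of squared gradients
lr_policy: "fixed"
max_iter: 50000
```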

Ashutosh Singla