
In TensorFlow, after I set the `trainable` flag of each layer to False, attempting to train the network does not change the weights (as expected). However, each epoch still takes the same amount of time (about 12 seconds) to train, just as it would without freezing any layers.

For clarification, I set the trainable flag of each layer to False before compilation.

for layer in model.layers:
    layer.trainable = False

Does anyone know why this is happening? My actual intention is to reduce the training time of the network by freezing some weights. When freezing some of the weights did not reduce the training time, I tried freezing all of them, but even that did not reduce it.
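For reference, here is a minimal, self-contained sketch (a toy model, not my actual network) of what I am doing: freeze every layer before `compile()`, train for one epoch, and confirm the weights do not change:

```python
# Hypothetical minimal example: freeze all layers *before* compiling,
# then verify that fit() leaves the weights untouched.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Freezing must happen before compile() to take effect.
for layer in model.layers:
    layer.trainable = False

model.compile(optimizer='sgd', loss='mse')

x = np.random.rand(32, 4).astype('float32')
y = np.random.rand(32, 1).astype('float32')

before = [w.copy() for w in model.get_weights()]
model.fit(x, y, epochs=1, verbose=0)
after = model.get_weights()

# With every layer frozen, no weight should have changed.
assert all(np.allclose(b, a) for b, a in zip(before, after))
```

The assertion passes, so the freezing itself works; only the per-epoch time is unaffected.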

It may be worth mentioning that I am using TensorFlow 1.12.0.

  • I even tried model.get_layer(index=i).trainable = False. It still takes the same amount of time per epoch to train. – Rehana Mahfuz Jun 22 '20 at 21:40
  • Can you provide time comparisons? with and without `trainable=False`? – thushv89 Jun 22 '20 at 22:57
  • The time taken per epoch remains the same: 12 sec. It is unchanged whether `trainable=False` is set on all layers, on some layers, or on none of them. – Rehana Mahfuz Jun 23 '20 at 01:54
  • You might want to share more of your code, the model / training code /evaluating code, etc. – thushv89 Jun 23 '20 at 02:50
  • 1
    I realized that the absence of reduction in training time after freezing layers only happens while using `model.fit_generator`, and not while using `model.fit` for training. I will have to dig deeper into the source code for `fit_generator` – Rehana Mahfuz Jun 24 '20 at 14:07

0 Answers