
I recently started using TensorBoard to monitor my machine learning project. I use the Adam optimizer with a decaying learning rate:

import tensorflow as tf
from tensorflow import keras

early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=4)
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=1e-3, decay_steps=10000, end_learning_rate=5e-5, power=0.25)
optimizer = keras.optimizers.Adam(learning_rate=lr_schedule)
model.compile(loss='mae', optimizer=optimizer, metrics=['mse'])
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/fit/" + model_id, histogram_freq=1)
history = Model.model.fit([trainX_jets, trainX_other], trainY, verbose=1, epochs=256,
                          validation_data=([valX_jets, valX_other], valY), shuffle=True,
                          callbacks=[early_stop, tensorboard_callback], batch_size=1000)
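As a sanity check, the schedule can be evaluated on its own (a standalone sketch, independent of the training code above), just to see what value the PolynomialDecay itself produces at different step counts:

import tensorflow as tf

# Standalone check: evaluate the same PolynomialDecay at a few step counts
# to see what value the schedule itself returns.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=1e-3, decay_steps=10000,
    end_learning_rate=5e-5, power=0.25)

for step in (0, 2500, 5000, 7500, 10000):
    print(step, float(lr_schedule(step)))  # decays from 1e-3 down to 5e-5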

However, in TensorBoard the 'epoch_learning_rate' plot shows a steady value of 5e-5, which is my end_learning_rate. It seems like the polynomial decay isn't happening and the optimizer is just using my final rate, which is not what I want.

[Screenshot: TensorBoard 'epoch_learning_rate' plot showing a constant learning rate]

Is my model actually not using the learning rate I want it to, or is TensorBoard just plotting the wrong thing?

I've tried searching for this problem and changing the parameters of PolynomialDecay, but TensorBoard always shows the final learning rate.
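One way to check what the optimizer actually uses (a minimal sketch, assuming TensorFlow 2.x / tf.keras; the LRLogger name is just for illustration) would be a small callback that evaluates the schedule at the optimizer's current iteration count at the end of each epoch, for comparison against the TensorBoard curve:

import tensorflow as tf

class LRLogger(tf.keras.callbacks.Callback):
    """Print the learning rate the optimizer computes at the current step."""
    def on_epoch_end(self, epoch, logs=None):
        lr = self.model.optimizer.learning_rate
        # When a LearningRateSchedule is passed to the optimizer, this attribute
        # is the schedule object itself; evaluate it at the current iteration.
        if isinstance(lr, tf.keras.optimizers.schedules.LearningRateSchedule):
            lr = lr(self.model.optimizer.iterations)
        print(f"epoch {epoch}: effective lr = {float(lr):.3e}")

# usage: callbacks=[early_stop, tensorboard_callback, LRLogger()]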

Jenna
