When I train multiple models in a row, the fitting of the previous model affects the next one, even when the new model has a different number of layers and neurons. The problem looks similar to "The clear_session() method of keras.backend does not clean up the fitting data", but none of the solutions suggested there work for me. When training ends for one model and begins for the next, the loss picks up where it left off instead of starting from a freshly initialized network.
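Schematically, the hyperparameter search just calls the evaluation function below once per combination (hyperparameter_combinations is a hypothetical stand-in for whatever the search proposes):

# Hypothetical outer loop: one call to evaluate_network() per combination
for lr, neurons, layers in hyperparameter_combinations:
    score = evaluate_network(lr, neurons, layers)

The evaluation function itself: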
import math
import tensorflow as tf
from tensorflow.keras import backend as K

def evaluate_network(lr, neurons, layers):
    avg_loss = []
    for trial in range(1):  # loop kept for averaging over multiple trials (currently one)
        global model
        model = init_model(layers, neurons)
        # lr = tf.keras.optimizers.schedules.PiecewiseConstantDecay([6000], [2e-2, 5e-3])
        global optim
        optim = tf.keras.optimizers.Adam(learning_rate=lr / 10000)
        # Number of training epochs
        iters = 15000
        for i in range(iters + 1):
            loss = train_step()
            avg_loss.append(loss)
        # Everything below is an attempt to wipe the state left over from this trial
        reset_seeds()
        K.clear_session()
        tf.keras.backend.clear_session()
        tf.compat.v1.reset_default_graph()
        del model
        del optim
        K.clear_session()
        tf.keras.backend.clear_session()
        tf.compat.v1.reset_default_graph()
        reset_seeds()
    avg = sum(avg_loss) / len(avg_loss)
    print(avg)
    return -math.log(avg)
where reset_seeds() is defined as follows:
import random
import numpy as np
import tensorflow as tf

def reset_seeds():
    # Re-seed every RNG that Keras/TF might draw from
    np.random.seed(1)
    random.seed(2)
    if tf.__version__[0] == '2':
        tf.random.set_seed(3)
    else:
        tf.set_random_seed(3)
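For completeness, train_step() is defined elsewhere and takes no arguments; it reads the global model and optim, which is why those globals are declared in evaluate_network(). A hypothetical minimal version with the same shape as mine (x_train and y_train stand in for my actual data, and the real function may or may not be wrapped in tf.function):

@tf.function
def train_step():
    # Reads the globals model/optim instead of taking them as arguments
    with tf.GradientTape() as tape:
        pred = model(x_train, training=True)
        loss = tf.reduce_mean(tf.square(pred - y_train))
    grads = tape.gradient(loss, model.trainable_variables)
    optim.apply_gradients(zip(grads, model.trainable_variables))
    return loss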
As the code shows, I have tried pretty much every teardown suggestion I could find (clear_session(), reset_default_graph(), deleting the model and optimizer, reseeding), but none of it works.
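The only remaining idea I have is to run each trial in its own process so that all TensorFlow state dies with the process. A sketch of that workaround (untested in my setup; I would much prefer a single-process solution):

from multiprocessing import Process, Queue

def _isolated_trial(queue, lr, neurons, layers):
    # All TF graph/optimizer state lives only in this child process
    queue.put(evaluate_network(lr, neurons, layers))

def evaluate_network_isolated(lr, neurons, layers):
    queue = Queue()
    p = Process(target=_isolated_trial, args=(queue, lr, neurons, layers))
    p.start()
    result = queue.get()  # read before join() to avoid blocking on a full pipe
    p.join()
    return result

Is there a way to get a genuinely fresh model and optimizer without resorting to something like this?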