To avoid overfitting, I need to pause training every X steps and evaluate the model on held-out validation data. If the validation curve (iterations x loss) crosses the training curve (iterations x loss), I need to stop training.
How can I validate the training result to avoid overfitting? (My rough idea of what this check could look like is sketched at the end of this post.)
def train(self, dataset):
    num_samples = len(dataset)
    ram_train = []  # resource-usage readings collected every 10 epochs
    print('Training...')
    tic = time.time()
    # start a TensorFlow session and initialize all variables
    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        for i in range(self.epoch):  # iterate over the training epochs
            for j in range(num_samples):  # train the network on one data item at a time
                loss, _ = sess.run([self.loss, self.train_op], feed_dict={self.x: [dataset[j]]})
            if i % 10 == 0:
                ram_train.append(cpu_usage(1))  # cpu_usage() is a helper defined elsewhere in my code
                print(f'epoch {i}: loss = {loss}')
        self.saver.save(sess, f'./model_hidden{self.hidden}_wdw{self.window}.ckpt')
    tac = time.time()
    print('Done.')
    return loss, ram_train, (tac - tic)
I created a class named Autoencoder, and one of its methods trains the ANN. This code runs, but the resulting model is overfitted. I searched online and checked the TensorFlow session documentation for a parameter I could add to my code, but without success.
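For reference, this is my rough idea of what the validation check could look like inside train(). The train_set/val_set split, the patience counter, and saving only the best checkpoint are just my assumptions, not something that already exists in the class:

def train(self, train_set, val_set, patience=5):
    # Sketch only: assumes the same attributes as above (self.loss, self.train_op,
    # self.x, self.epoch, self.saver) and that the dataset was split into
    # train_set / val_set before calling this method.
    print('Training...')
    tic = time.time()
    best_val_loss = float('inf')
    epochs_without_improvement = 0
    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        for i in range(self.epoch):
            # training pass: update the weights on every training sample
            for sample in train_set:
                train_loss, _ = sess.run([self.loss, self.train_op],
                                         feed_dict={self.x: [sample]})
            # validation pass: only evaluate the loss, never run self.train_op
            val_loss = sum(sess.run(self.loss, feed_dict={self.x: [sample]})
                           for sample in val_set) / len(val_set)
            print(f'epoch {i}: train loss = {train_loss}, val loss = {val_loss}')
            if val_loss < best_val_loss:
                best_val_loss = val_loss
                epochs_without_improvement = 0
                # keep only the checkpoint with the best validation loss
                self.saver.save(sess, f'./model_hidden{self.hidden}_wdw{self.window}.ckpt')
            else:
                epochs_without_improvement += 1
                if epochs_without_improvement >= patience:
                    print(f'stopping early at epoch {i}: validation loss stopped improving')
                    break
    print(f'Done in {time.time() - tic:.1f}s.')
    return best_val_loss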