I have trained a seq2seq TensorFlow model for translating sentences from English to Spanish. I trained the model for 615,700 steps and saved the model checkpoints successfully. My training data size for both English and Spanish sentences is 200,000. I now want to retrain this model on 10K new sentence pairs, continuing from step 615,700. I am using the sequence-to-sequence TensorFlow model for this. How can I resume training from the last checkpoint? Here is the link that I am using for the translation.
I have 3 types of files in my train folder:
.index
.meta
.data
and a checkpoint file.
My new training dataset files are europarl_train.es-en.en and europarl_train.es-en.es, for English and Spanish sentences respectively.
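I assume these new files first have to be converted to token ids with the same vocabularies that were built for the original 200,000 pairs (otherwise the ids would no longer match the checkpointed embeddings). Something like the sketch below is what I have in mind; data_utils.data_to_token_ids is the helper from the tutorial's data_utils.py, and the data_dir path and vocab40000.* file names are placeholders for whatever my data directory actually contains:

import data_utils

data_dir = '/home/i9/L-T_Model_Training/16_NOV_MODEL/data'  # placeholder path

# Convert the new sentences to token ids using the existing vocabularies,
# so the ids stay consistent with the checkpointed embeddings.
data_utils.data_to_token_ids('europarl_train.es-en.en',
                             data_dir + '/new_train.ids.en',
                             data_dir + '/vocab40000.en')
data_utils.data_to_token_ids('europarl_train.es-en.es',
                             data_dir + '/new_train.ids.es',
                             data_dir + '/vocab40000.es')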
I wrote code to load my model's .meta file and weights:
import tensorflow as tf

import data_utils
import seq2seq_model
import translate

train_dir = '/home/i9/L-T_Model_Training/16_NOV_MODEL/train'

with tf.Session() as sess:
    # Rebuild the graph from the saved .meta file and restore the latest weights.
    saver = tf.train.import_meta_graph(train_dir + '/translate.ckpt-615700.meta')
    saver.restore(sess, tf.train.latest_checkpoint(train_dir))
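What I think I actually need, instead of import_meta_graph, is to rebuild the model through seq2seq_model.Seq2SeqModel with the same hyperparameters I trained with, restore the step-615,700 weights through the model's own saver, and then keep calling step() on batches from the new data. Below is a rough sketch of what I have in mind; the hyperparameter values, the token-id file paths, and the use of translate.read_data are my assumptions, not something I have verified:

import random

import tensorflow as tf

import seq2seq_model
import translate

train_dir = '/home/i9/L-T_Model_Training/16_NOV_MODEL/train'
_buckets = [(5, 10), (10, 15), (20, 25), (40, 50)]  # same buckets as in translate.py

with tf.Session() as sess:
    # Same hyperparameters as my original 615,700-step run (placeholders here).
    model = seq2seq_model.Seq2SeqModel(
        source_vocab_size=40000, target_vocab_size=40000,
        buckets=_buckets, size=1024, num_layers=3,
        max_gradient_norm=5.0, batch_size=64,
        learning_rate=0.5, learning_rate_decay_factor=0.99,
        forward_only=False)

    # Restore the step-615700 weights into the freshly built graph.
    ckpt = tf.train.get_checkpoint_state(train_dir)
    model.saver.restore(sess, ckpt.model_checkpoint_path)

    # Token-id versions of the new 10K sentence pairs (paths are mine).
    train_set = translate.read_data('new_data/new_train.ids.en',
                                    'new_data/new_train.ids.es')
    non_empty = [b for b in range(len(_buckets)) if train_set[b]]

    for _ in range(1000):  # however many extra steps I want to run
        bucket_id = random.choice(non_empty)
        encoder_inputs, decoder_inputs, target_weights = model.get_batch(
            train_set, bucket_id)
        _, loss, _ = model.step(sess, encoder_inputs, decoder_inputs,
                                target_weights, bucket_id, False)

    # Save the continued model back into the same train folder.
    model.saver.save(sess, train_dir + '/translate.ckpt',
                     global_step=model.global_step)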
Is this the right approach? How can I start retraining on this dataset from the last checkpoint?