I am training a neural network in TensorFlow. After each optimization step I want to keep a copy of the weights from before the update, so that if the step turns out to be bad I can roll back to the weights from before it.
At the moment I'm trying to do the following:
1. Copy the TensorFlow session with original_session = copy.copy(session)
2. Train on the batch using the Adam optimizer
3. If the step performed badly, close that session with session.close()
4. Continue with the copied original_session
I'm running into problems with this method: the process just quits with exit code 139 (a segmentation fault) and no error message.
It's important to me not to save the model to disk as a checkpoint file, for performance reasons. I just want to keep a copy of the network in memory.
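To make clear what kind of in-memory rollback I mean, here is a minimal framework-agnostic sketch in plain NumPy. The weights, the `train_step` update, and the `loss` function are all made-up placeholders standing in for the real TensorFlow graph; only the backup/rollback logic is the point:

```python
import numpy as np

def train_step(weights):
    # Dummy update standing in for one Adam step (placeholder).
    return {name: w - 0.1 * np.sign(w) for name, w in weights.items()}

def loss(weights):
    # Dummy objective standing in for the real loss (placeholder).
    return sum(float(np.sum(w ** 2)) for w in weights.values())

weights = {"layer1": np.ones((2, 2)), "layer2": np.ones(3)}

# Keep an in-memory copy of the weights before the update.
backup = {name: w.copy() for name, w in weights.items()}

updated = train_step(weights)

# If the step made things worse, roll back to the saved copy;
# otherwise keep the updated weights.
if loss(updated) > loss(weights):
    weights = backup
else:
    weights = updated
```

This is the behaviour I want, but done on the actual variables of a TensorFlow session instead of plain arrays.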
Do you have any ideas on how to do this in TensorFlow?
Thank you!