
I'm training a neural network in TensorFlow. After each optimization step I want to keep the weights from before the update, so that if the optimization step wasn't good I can go back to the weights from before that step.

At the moment I'm trying to do the following (roughly sketched in code after the list):

  1. Copy the TensorFlow session with original_session = copy.copy(session)

  2. Use the Adam optimizer to train on the current batch

  3. Close the badly performing session with session.close()

  4. Continue with the copied original session
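
Roughly, as a self-contained sketch (the toy model and names like train_op here are just placeholders for my real network):

import copy

import numpy as np
import tensorflow as tf

# Toy model, only so the sketch is self-contained.
x = tf.placeholder(tf.float32, [None, 4])
y = tf.placeholder(tf.float32, [None, 1])
pred = tf.layers.dense(x, 1)
loss = tf.losses.mean_squared_error(y, pred)
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)

session = tf.Session()
session.run(tf.global_variables_initializer())

# Step 1: copy the session object before the update.
original_session = copy.copy(session)

# Step 2: one Adam step on the current batch.
batch_x = np.random.rand(8, 4).astype(np.float32)
batch_y = np.random.rand(8, 1).astype(np.float32)
session.run(train_op, feed_dict={x: batch_x, y: batch_y})

# Steps 3 and 4: close the badly performing session and continue with
# the copy. (In my real program the process exits with code 139.)
session.close()
original_session.run(loss, feed_dict={x: batch_x, y: batch_y})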

I'm running into problems with this method: the process just quits with exit code 139 (a segmentation fault) without any error message.

For performance reasons it's important that I don't save the model to the hard disk as a checkpoint file. I just want to keep a copy of the network in memory.
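
To be clear about what I mean by an in-memory copy, I imagine something along these lines, continuing the sketch above (I don't know whether Variable.load is the right tool here):

variables = tf.trainable_variables()

# Take an in-memory snapshot of the weights before the update
# (a list of numpy arrays, nothing touches the disk).
weights_before = session.run(variables)

# ... one optimization step as above ...

# If the step was bad, load the saved values back into the variables.
for var, value in zip(variables, weights_before):
  var.load(value, session)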

Do you have any ideas on how to do this in TensorFlow?

Thank you!

DeepC1

1 Answer


You could just use separate graphs like this:

import tensorflow as tf

g1 = tf.Graph()
g2 = tf.Graph()

with g1.as_default():
  # build your 1st model
  sess1 = tf.Session(graph=g1)
  # do some work with sess1 on g1
  sess1.run(...)

with g2.as_default():
  # build your 2nd model
  sess2 = tf.Session(graph=g2)
  # do some work with sess2 on g2
  sess2.run(...)

with g1.as_default():
  # do some more work with sess1 on g1 
  sess1.run(...)

with g2.as_default():
  # do some more work with sess2 on g2
  sess2.run(...)

sess1.close()
sess2.close()
  • Between-graph replication is described here in case you need it.
  • You could also look into the variable reuse functionality (sketched briefly below).
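
Assuming TF 1.4 or later (for tf.AUTO_REUSE), variable reuse looks roughly like this:

import tensorflow as tf

def model(inputs):
  # The second call with the same scope name reuses the existing
  # variable instead of creating a new one.
  with tf.variable_scope("net", reuse=tf.AUTO_REUSE):
    w = tf.get_variable("w", shape=[4, 1])
    return tf.matmul(inputs, w)

x1 = tf.placeholder(tf.float32, [None, 4])
x2 = tf.placeholder(tf.float32, [None, 4])

out1 = model(x1)  # creates net/w
out2 = model(x2)  # reuses net/w
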
mrk
  • Thank you, but this doesn't really answer the question, because I don't want to share variables between different graphs. I want to make one optimization step on my network and keep the weights from right before that step, so that, theoretically, I could go back to the weights before the optimization step. – DeepC1 Oct 06 '18 at 17:11