To avoid carrying optimizer and gradient nodes into the inference environment, I'm trying to create two versions of the graph: one with training nodes and one without.
The idea was to use tensorflow.train.Saver
to pass variables from the training graph into the inference graph.
So I tried the following:
# Create training graph
trainingGraph = tf.Graph()
with trainingGraph.as_default():
    trainOp, lossOp = self.CreateTrainingGraph()
    trainInitOp = tf.initialize_variables(tf.all_variables(), "init_variables")
    # Add saver op
    self.saverOp = tf.train.Saver()

# Create inference graph
inferenceGraph = tf.Graph()
with inferenceGraph.as_default():
    self.CreateInferenceGraph()
    # Add saver op, compatible with training saver
    tf.train.Saver(saver_def=self.saverOp.as_saver_def())
In this case, CreateTrainingGraph()
calls CreateInferenceGraph()
and adds the optimizer and loss on top of it.
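To make that layering concrete, here is a minimal plain-Python sketch (no TensorFlow; the function names are hypothetical stand-ins for my builder methods) of how the two builders relate:

```python
def create_inference_graph():
    # Builds only the forward-pass nodes the inference environment needs.
    return {"input": "input_op", "logits": "logits_op"}

def create_training_graph():
    # Reuses the inference graph and layers loss/optimizer nodes on top.
    nodes = create_inference_graph()
    nodes["loss"] = "loss_op"
    nodes["train"] = "optimizer_op"
    return nodes

training_nodes = create_training_graph()
inference_nodes = create_inference_graph()
```

The inference version carries none of the optimizer or gradient nodes, which is the whole point of keeping two graphs.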
For some reason, the tf.train.Saver
constructor doesn't add a save/restore_all
node to the inference graph (or I just don't understand what the saver_def
option does). I also tried the empty constructor, and
sess.run([model.saverOp._restore_op_name],
{ model.saverOp._filename_tensor_name : "Params/data.pb" })
failed with the error
<built-in function delete_Status> returned a result with an error set
What is the proper way to achieve this?
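For reference, this is the save/restore round trip I'm trying to reproduce, sketched with pickle in place of the TensorFlow checkpoint machinery (all names here are hypothetical, not TensorFlow API):

```python
import io
import pickle

def save_variables(variables, stream):
    # Training side: persist the variable values
    # (stand-in for saving a checkpoint from the training graph).
    pickle.dump(variables, stream)

def restore_variables(stream):
    # Inference side: load the same values into a freshly built graph
    # that defines no optimizer state (stand-in for restoring).
    return pickle.load(stream)

checkpoint = io.BytesIO()
save_variables({"weights": [0.1, 0.2], "bias": [0.0]}, checkpoint)
checkpoint.seek(0)
restored = restore_variables(checkpoint)
```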