
I trained a network with a TFRecord input pipeline; in other words, there were no placeholders. A simple example would be:

input, truth = _get_next_batch()  # TFRecord. `input` is not a tf.placeholder
net = Model(input)
net.set_loss(truth)
optimizer = tf...(net.loss)

Let's say I end up with three files: ckpt-20000.meta, ckpt-20000.data-00000-of-00001, and ckpt-20000.index. I understand that one can later import the meta-graph from the .meta file and access tensors such as:

new_saver = tf.train.import_meta_graph('ckpt-20000.meta')
new_saver.restore(sess, 'ckpt-20000')
logits = tf.get_collection("logits")[0]

However, the meta-graph does not contain a placeholder, because the pipeline never used one. Is there a way to use the meta-graph and query inference on a new input?

For reference, in a query application (or script) I used to define the model with a placeholder and restore the weights (see below). I am wondering whether I can just use the meta-graph without re-defining the model, since that would be much simpler.

input = tf.placeholder(...)
net = Model(input)
saver = tf.train.Saver()
saver.restore(sess, 'ckpt-20000')
lgt = sess.run(net.logits, feed_dict={input: img})
YW P Kwon

3 Answers


You can build the graph using placeholder_with_default() for the inputs, so that it can consume both the TFRecord input pipeline and a feed_dict.

An example:

input, truth = _get_next_batch()
_x = tf.placeholder_with_default(input, shape=[...], name='input')
_y = tf.placeholder_with_default(truth, shape=[...], name='label')

net = Model(_x)
net.set_loss(_y)
optimizer = tf...(net.loss)

Then during inference,

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
  new_saver = tf.train.import_meta_graph('ckpt-20000.meta')
  new_saver.restore(sess, 'ckpt-20000')

  # Get the tensors by name
  input = loaded_graph.get_tensor_by_name('input:0')
  logits = loaded_graph.get_tensor_by_name(...)

  # Now you can feed the inputs to your tensors
  lgt = sess.run(logits, feed_dict={input: img})

In the above example, if you don't feed input, its value is read from the TFRecord input pipeline.
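For get_tensor_by_name() to work at inference time, the output tensor needs a known name when the graph is first built. A minimal sketch of one way to do that (the tf.identity wrapper, the concrete shape, and the name 'logits' are illustrative choices, not something the checkpoint gives you for free):

input, truth = _get_next_batch()  # TFRecord pipeline, as in the question
_x = tf.placeholder_with_default(input, shape=[None, 224, 224, 3], name='input')  # shape is an assumption

net = Model(_x)
# Wrap the output in tf.identity to pin a stable name on it, so that after
# import_meta_graph() it can be fetched with get_tensor_by_name('logits:0').
logits = tf.identity(net.logits, name='logits')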

Vijay Mariappan
  • Thanks, you made my day! I never knew about the `placeholder_with_default`! – YW P Kwon Jun 28 '17 at 00:51
  • @vijay m .. This is suitable when you have access to the graph you previously coded. Is there a way to add a placeholder for a pre-trained model downloaded from online? How would one go about this? Any idea? – lamo_738 Mar 18 '19 at 18:13

Is there a way to do it without placeholders at test time, though? It should be possible to re-use the graph with a new input pipeline without resorting to slow placeholder feeding (the test dataset may be very large). placeholder_with_default is a suboptimal solution in that case.

Jason
  • There seems to be a way of doing this with tensorflow 1.6, as shown [in this answer](https://stackoverflow.com/a/49236050/4885324). – metastableB Mar 12 '18 at 13:11
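A rough sketch of the kind of approach that linked answer describes, using a reinitializable tf.data iterator (TF >= 1.4) so the same graph can be switched between datasets with no placeholders at all; the file names, parse_fn, and Model are assumptions following the question's pseudocode:

train_ds = tf.data.TFRecordDataset('train.tfrecord').map(parse_fn).batch(32)
test_ds = tf.data.TFRecordDataset('test.tfrecord').map(parse_fn).batch(32)

# One iterator built from the common structure; switching datasets is just
# a matter of running the matching initializer op.
iterator = tf.data.Iterator.from_structure(train_ds.output_types,
                                           train_ds.output_shapes)
input, truth = iterator.get_next()
net = Model(input)

train_init = iterator.make_initializer(train_ds)
test_init = iterator.make_initializer(test_ds)

with tf.Session() as sess:
  sess.run(test_init)           # point the pipeline at the test set
  lgt = sess.run(net.logits)    # no feed_dict, no placeholder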

The recommended way is to save two meta-graphs: one for training/validation/testing, and the other for inference.

see Building a SavedModel

export_dir = ...
...
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
with tf.Session(graph=tf.Graph()) as sess:
  ...
  builder.add_meta_graph_and_variables(sess,
                                       [tag_constants.TRAINING],
                                       signature_def_map=foo_signatures,
                                       assets_collection=foo_assets)
...
# Add a second MetaGraphDef for inference.
with tf.Session(graph=tf.Graph()) as sess:
  ...
  builder.add_meta_graph([tag_constants.SERVING])
...
builder.save()
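For completeness, loading the inference meta-graph back looks roughly like this (a sketch using the TF 1.x tf.saved_model.loader API; the tensor name in the comment is only an example):

with tf.Session(graph=tf.Graph()) as sess:
  # Load only the MetaGraphDef tagged for serving/inference.
  tf.saved_model.loader.load(sess, [tag_constants.SERVING], export_dir)
  # Tensors can then be fetched by name, e.g.
  # logits = sess.graph.get_tensor_by_name('logits:0')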

The NMT tutorial also provides a detailed example of creating multiple graphs with shared variables: Neural Machine Translation (seq2seq) Tutorial: Building Training, Eval, and Inference Graphs

import itertools

import tensorflow as tf

train_graph = tf.Graph()
eval_graph = tf.Graph()
infer_graph = tf.Graph()

with train_graph.as_default():
  train_iterator = ...
  train_model = BuildTrainModel(train_iterator)
  initializer = tf.global_variables_initializer()

with eval_graph.as_default():
  eval_iterator = ...
  eval_model = BuildEvalModel(eval_iterator)

with infer_graph.as_default():
  infer_iterator, infer_inputs = ...
  infer_model = BuildInferenceModel(infer_iterator)

checkpoints_path = "/tmp/model/checkpoints"

train_sess = tf.Session(graph=train_graph)
eval_sess = tf.Session(graph=eval_graph)
infer_sess = tf.Session(graph=infer_graph)

train_sess.run(initializer)
train_sess.run(train_iterator.initializer)

for i in itertools.count():

  train_model.train(train_sess)

  if i % EVAL_STEPS == 0:
    checkpoint_path = train_model.saver.save(train_sess, checkpoints_path, global_step=i)
    eval_model.saver.restore(eval_sess, checkpoint_path)
    eval_sess.run(eval_iterator.initializer)
    while data_to_eval:
      eval_model.eval(eval_sess)

  if i % INFER_STEPS == 0:
    checkpoint_path = train_model.saver.save(train_sess, checkpoints_path, global_step=i)
    infer_model.saver.restore(infer_sess, checkpoint_path)
    infer_sess.run(infer_iterator.initializer, feed_dict={infer_inputs: infer_input_data})
    while data_to_infer:
      infer_model.infer(infer_sess)