
I am trying to run inference with a trained TensorFlow model. During training, I used a batch size of 8, obtained through the shuffle_batch function:

image_batch, label_batch = \
    tf.train.shuffle_batch([image, label], batch_size=8, 
                           capacity=2000, num_threads=2, 
                           min_after_dequeue=1000)
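One way to make a graph built this way accept any batch size at inference time is to route the queue output through tf.placeholder_with_default, whose shape can use None for the batch dimension. The sketch below is hypothetical (the default_batch constant stands in for the real shuffle_batch output, and no actual network is built); it uses the tf.compat.v1 API so it also runs under TensorFlow 2.x:

    import numpy as np
    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()

    # Hypothetical stand-in for the `image_batch` tensor produced by
    # tf.train.shuffle_batch during training (fixed batch of 8).
    default_batch = tf.zeros([8, 384, 384, 3])

    # Wrap the queue output: when nothing is fed, the default (the queue)
    # is used; at inference, any batch size can be fed in its place.
    images = tf.placeholder_with_default(default_batch,
                                         shape=[None, 384, 384, 3])
    # ... build the rest of the network on `images` instead of `image_batch` ...

    with tf.Session() as sess:
        # No feed: falls back to the default batch of 8.
        n_default = sess.run(tf.shape(images))[0]
        # Feed a single image at inference time.
        single = np.ones((1, 384, 384, 3), np.float32)
        n_fed = sess.run(tf.shape(images), {images: single})[0]
        print(n_default, n_fed)

If `images` is added to the collection (e.g. with tf.add_to_collection('image_batch_key', images)) instead of the raw shuffle_batch output, the restored tensor accepts a (1, 384, 384, 3) feed directly.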

The following is the relevant part of the inference phase:

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('model_enc_dec-0.meta')
    saver.restore(sess, tf.train.latest_checkpoint('./'))

    graph = tf.get_default_graph()
    pred = tf.get_collection('logits_key')[0]

    image_batch_tensor = tf.get_collection('image_batch_key')[0]
    .......
    .......
    # image_loaded_from_disk is of size (1, 384, 384, 3)
    feed_dict = {image_batch_tensor: image_loaded_from_disk}
    pred_np = sess.run([pred], feed_dict=feed_dict)

The following is the error I get at the sess.run() call when I run inference on the restored model:

ValueError: Cannot feed value of shape (1, 384, 384, 3) for Tensor u'shuffle_batch:0', which has shape '(8, 384, 384, 3)'

How should the trained model be adapted to accept a single image to forward through the model?
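If retraining is not an option, a workaround that leaves the fixed (8, ...) input shape intact is to tile the single image along the batch axis on the NumPy side and keep only the first prediction. This is a sketch, not the asker's actual code; the zero image below stands in for the real image_loaded_from_disk:

    import numpy as np

    # Hypothetical single image loaded from disk, shape (1, 384, 384, 3).
    image_loaded_from_disk = np.zeros((1, 384, 384, 3), np.float32)

    # Repeat the image along axis 0 to match the graph's fixed batch size of 8.
    batch_of_8 = np.tile(image_loaded_from_disk, (8, 1, 1, 1))

    # feed_dict = {image_batch_tensor: batch_of_8}
    # pred_np = sess.run(pred, feed_dict=feed_dict)
    # prediction_for_single_image = pred_np[0]

This wastes 7/8 of the forward-pass compute, but requires no change to the saved graph.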

bugvig_pk
  • Did you use `None` for the placeholder's shape? If so maybe it is possible just to replace `image_batch_tensor` by a new placeholder of shape (1, 384, 384, 3) – Tobias Scheithauer Aug 04 '17 at 08:04
  • No. `shuffle_batch` just returned a tensor of size (batch_size, 384, 384, 3). I couldn't choose a `None`. But, I found an option in `shuffle_batch` called `allow_smaller_final_batch=True`. I wonder if this will cause slower training speed. – bugvig_pk Aug 04 '17 at 14:47
  • Adding `allow_smaller_final_batch=True` to `tf.train.shuffle_batch` freezes the training code. It does not even train. – bugvig_pk Aug 04 '17 at 18:20
  • The important question is: How is your variable `image_batch_tensor` declared? – Tobias Scheithauer Aug 05 '17 at 08:02
  • In the training phase, I add the tensor `image_batch` to a collection called `'image_batch_key'` as follows: `tf.add_to_collection('image_batch_key', image_batch)`. This is the same tensor that is obtained after loading the meta graph. – bugvig_pk Aug 06 '17 at 01:50
  • The answer at this question might help you [https://stackoverflow.com/questions/49740247/tensorflow-mnist-estimator-batch-size-affects-the-graph-expected-input](https://stackoverflow.com/questions/49740247/tensorflow-mnist-estimator-batch-size-affects-the-graph-expected-input) – Michele Fortunato Nov 18 '19 at 21:45

0 Answers