
Why do I get this error for slim.fully_connected()?

ValueError: Input 0 of layer fc1 is incompatible with the layer: : expected min_ndim=2, found ndim=1. Full shape received: [32]

My input is Tensor("batch:0", shape=(32,), dtype=float32) from tf.train.batch():

inputs, labels = tf.train.batch(
    [input, label],
    batch_size=batch_size,
    num_threads=1,
    capacity=2 * batch_size)

If I reshape the input to (32, 1), it works fine:

inputs, targets = load_batch(train_dataset)
print("inputs:", inputs, "targets:", targets)
# inputs: Tensor("batch:0", shape=(32,), dtype=float32) targets: Tensor("batch:1", shape=(32,), dtype=float32)

inputs = tf.reshape(inputs, [-1,1])
targets = tf.reshape(targets, [-1,1])

The examples in the slim walkthrough seem to work without explicitly reshaping after load_batch().
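For what it's worth, the same shape mismatch can be reproduced in plain NumPy: a fully connected layer is essentially a matrix multiply, which needs a 2-D (batch, features) input. A minimal sketch of why (32,) fails but (32, 1) works, with NumPy standing in for the TF op (the shapes mirror the tensors above; the weight shape is an assumption for illustration):

```python
import numpy as np

batch = np.arange(32, dtype=np.float32)      # shape (32,), like Tensor("batch:0")
weights = np.ones((1, 4), dtype=np.float32)  # hypothetical fc weights: 1 feature -> 4 units

# batch @ weights would raise a shape error: (32,) is 1-D,
# but the layer's weight multiply needs (batch, features).

batch_2d = batch.reshape(-1, 1)              # shape (32, 1): batch of 1-feature examples
out = batch_2d @ weights                     # shape (32, 4)
print(out.shape)                             # (32, 4)
```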

michael

1 Answer


tf.train.batch expects array-like inputs because, practically speaking, scalar features are quite rare. So you have to reshape your input to give each example an explicit feature dimension. I think the next code snippet will clear things up:

>>> import numpy as np
>>> a = np.array([1,2,3,4])
>>> a.shape
(4,)
>>> a = np.reshape(a,[4,1])
>>> a
array([[1],
       [2],
       [3],
       [4]])
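An equivalent way to add the trailing axis, assuming standard NumPy semantics, is np.expand_dims (or reshape(-1, 1), which also works when the length isn't known up front):

```python
import numpy as np

a = np.array([1, 2, 3, 4])
col = np.expand_dims(a, axis=1)  # insert a new axis at position 1 -> shape (4, 1)
print(col.shape)                 # (4, 1)
print(np.array_equal(col, a.reshape(-1, 1)))  # True: same result either way
```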
Souradeep Nanda
  • That's what I thought, but my np.array looks like this `shape x= (5, 1) [[4.6446195] [9.981602 ] [7.1564007] [8.539308 ] [2.0904353]]` and I'm using `tf.train.Feature(float_list=tf.train.FloatList(value= inputs[m].tolist() ))` so I can't believe it comes out as an array of scalars and not an array of arrays. – michael Mar 02 '18 at 07:02
  • I have never used `tf.train.FloatList` before, but apparently it runs a `tf.squeeze` before saving the data. The API docs are empty; I'll have to verify by looking at the code. – Souradeep Nanda Mar 02 '18 at 14:32