
Code:

a=training_dataset.map(lambda x,y: (tf.pad(x,tf.constant([[13-int(tf.shape(x)[0]),0],[0,0]])),y))

gives the following error:

TypeError: in user code:

<ipython-input-32-b25101c2110a>:1 None  *
    a=training_dataset.map(lambda x,y: (tf.pad(tensor=x,paddings=tf.constant([[13-int(tf.shape(x)[0]),0],[0,0]]),mode="CONSTANT"),y))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:264 constant  **
    allow_broadcast=True)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:282 _constant_impl
    allow_broadcast=allow_broadcast))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py:456 make_tensor_proto
    _AssertCompatible(values, dtype)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py:333 _AssertCompatible
    raise TypeError("Expected any non-tensor type, got a tensor instead.")

TypeError: Expected any non-tensor type, got a tensor instead.

However, when I use:

a=training_dataset.map(lambda x,y: (tf.pad(x,tf.constant([[1,0],[0,0]])),y))

The above code works fine. This brings me to the conclusion that something is wrong with 13 - tf.shape(x)[0], but I cannot understand what. I tried converting tf.shape(x)[0] to int(tf.shape(x)[0]) and still got the same error.

What I want the code to do: I have a tf.data.Dataset object of variable-length sequences of shape (None, 128), where the first dimension (None) is always less than 13. I want to pad the sequences so that every one has shape (13, 128). Is there an alternative way (in case the above problem cannot be solved)?
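
For reference, a minimal way to construct such a dataset with synthetic data (the generator, sequence lengths, and label values below are stand-ins, not the real pipeline):

import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the real data: (sequence, label) pairs where each
# sequence has shape (length, 128) with length < 13.
def make_examples():
    for length in (5, 9, 12):
        yield np.random.rand(length, 128).astype(np.float32), 0

training_dataset = tf.data.Dataset.from_generator(
    make_examples,
    output_types=(tf.float32, tf.int32),
    output_shapes=((None, 128), ()),
)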

1 Answer


A solution that works:

using:

paddings = tf.concat(([[13-tf.shape(x)[0],0]], [[0,0]]), axis=0)

instead of using:

paddings = tf.constant([[13-tf.shape(x)[0],0],[0,0]])

works for me. However, I still cannot figure out why the latter one did not work.
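
For completeness, here is the fix dropped into the original map call, as a minimal sketch (assuming the dataset yields (sequence, label) pairs whose sequence length is at most 13):

# Build the paddings matrix with tf.concat so it can hold the symbolic value
# 13 - tf.shape(x)[0]; the Python list containing that tensor is auto-packed
# into a tensor instead of being passed to tf.constant.
a = training_dataset.map(
    lambda x, y: (
        tf.pad(x, tf.concat(([[13 - tf.shape(x)[0], 0]], [[0, 0]]), axis=0)),
        y,
    )
)

for padded, label in a:
    print(padded.shape)  # (13, 128) for every element

A plausible reason the tf.constant version fails is that tf.constant must materialize concrete values when the function is traced, so it rejects the symbolic tensor 13 - tf.shape(x)[0] (hence "Expected any non-tensor type, got a tensor instead"), whereas tf.concat and the automatic list packing accept tensors.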