I'm trying to reverse engineer the behavior of the tf.tensordot axes parameter, but I'm having a hard time.
Given the following code:
import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.], [4., 5.]])
b = tf.constant([1., 2.])
c = tf.constant([[1., 2.], [2., 3.], [3., 4.]])
print(f'Shape of c: {c.shape}')
ct = tf.transpose(c)
print(f'Shape of ct: {ct.shape}')
print('.................')
d = tf.tensordot(a, ct, axes=1)
print(f'Shape of d: {d.shape}')
print(d)
print('.................')
e = tf.tensordot(a, ct, axes=0)
print(f'Shape of e: {e.shape}')
print(e)
print('.................')
f = tf.tensordot(a, ct, axes=2)
print(f'Shape of f: {f.shape}')
print(f)
I understand how "d" is produced, but I don't understand how "e" and "f" are produced. The TensorFlow documentation isn't detailed enough for me to work it out.
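For reference, here's my attempt to reproduce the cases with np.tensordot, assuming the NumPy and TensorFlow semantics agree for a scalar axes argument (that's an assumption on my part; please correct me if they differ):

```python
import numpy as np

# Same tensors as above, assuming np.tensordot mirrors tf.tensordot
# for scalar `axes`.
a = np.array([[1., 2.], [3., 4.], [4., 5.]])
c = np.array([[1., 2.], [2., 3.], [3., 4.]])
ct = c.T  # shape (2, 3)

# axes=1: contract the last axis of `a` with the first axis of `ct`
# (ordinary matrix multiplication): (3, 2) x (2, 3) -> (3, 3).
d = np.tensordot(a, ct, axes=1)
print(d.shape)  # (3, 3)

# axes=0: contract nothing -- the outer product. Every element of `a`
# multiplies every element of `ct`: shape (3, 2) + (2, 3) -> (3, 2, 2, 3).
e = np.tensordot(a, ct, axes=0)
print(e.shape)       # (3, 2, 2, 3)
print(e[1, 0, 0, 2])  # a[1, 0] * ct[0, 2] = 3.0 * 3.0 = 9.0

# axes=2: contract the last TWO axes of the first argument with the first
# TWO axes of the second, so both contracted axes must match in size.
# (3, 2) against (2, 3) mismatches in NumPy, so here is a shape-matched
# example instead: fully contracting `a` with itself gives a scalar.
g = np.tensordot(a, a, axes=2)
print(g)  # sum over all i, j of a[i, j] * a[i, j] = 71.0
```

If this mental model is right, axes=N just means "sum over the last N axes of the first argument against the first N axes of the second", with axes=0 being a pure outer product, but I'd appreciate confirmation.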