
I have two vectors containing tensors of shape (3,3) and shape (3,3,3,3) respectively. The vectors have the same length, and I am computing the element-wise tensor dot of the two. For example, I want to vectorise the following computation to improve performance:

a = np.arange(9.).reshape(3,3)
b = np.arange(81.).reshape(3,3,3,3)
c = np.tensordot(a,b)

a_vec = np.asanyarray([a,a])
b_vec = np.asanyarray([b,b])    
c_vec = np.empty(a_vec.shape)

for i in range(c_vec.shape[0]):
    c_vec[i, :, :] = np.tensordot(a_vec[i,:,:], b_vec[i,:,:,:,:])

print(np.allclose(c_vec[0], c))
# True

I thought about using numpy.einsum but can't figure out the correct subscripts. I have tried a lot of different approaches but failed on all of them so far:

# I am trying something like this
c_vec = np.einsum("ijk, ilmno -> ijo", a_vec, b_vec)

print(np.allclose(c_vec[0], c))
# False

But this does not reproduce the iterative computation above. If this can't be done using einsum, or if there is a more performant way to do it, I am open to any kind of solution.

  • what's the point of doubling the input arrays? This part: `a_vec = np.asanyarray([a,a])` – Marat Aug 28 '20 at 21:19
  • @Divakar yes exactly, I am trying to vectorise the loop. I have *a lot* of tensors and hope to speed it up this way. – T A Aug 28 '20 at 21:21
  • @Marat There is no point really, I am just using this as a sanity check. – T A Aug 28 '20 at 21:23

2 Answers


A vectorized way with np.einsum would be:

c_vec = np.einsum('ijk,ijklm->ilm',a_vec,b_vec)
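This contracts the trailing two axes of each a_vec[i] against the leading two axes of each b_vec[i], which is exactly what the loop does. A quick check against the question's setup:

```python
import numpy as np

# Rebuild the example data from the question.
a = np.arange(9.).reshape(3, 3)
b = np.arange(81.).reshape(3, 3, 3, 3)
a_vec = np.asanyarray([a, a])
b_vec = np.asanyarray([b, b])

# Batched contraction: for each i, sum over the last two axes of
# a_vec[i] and the first two axes of b_vec[i].
c_vec = np.einsum('ijk,ijklm->ilm', a_vec, b_vec)

print(c_vec.shape)                                # (2, 3, 3)
print(np.allclose(c_vec[0], np.tensordot(a, b)))  # True
```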
Divakar

np.tensordot has an axes argument you can use too:

c_vec = np.tensordot(a_vec, b_vec, axes=([1, 2], [1, 2]))
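One caveat worth noting: tensordot has no notion of a batch axis, so this contracts every a_vec[p] against every b_vec[q] and returns shape (N, N, 3, 3) rather than (N, 3, 3); the element-wise result from the question is the p == q diagonal. A sketch with the question's data:

```python
import numpy as np

a = np.arange(9.).reshape(3, 3)
b = np.arange(81.).reshape(3, 3, 3, 3)
a_vec = np.asanyarray([a, a])
b_vec = np.asanyarray([b, b])

# Contracts axes (1, 2) of every a_vec[p] against axes (1, 2) of
# every b_vec[q] -- all N*N pairs, not just the matching ones.
full = np.tensordot(a_vec, b_vec, axes=([1, 2], [1, 2]))
print(full.shape)  # (2, 2, 3, 3)

# The element-wise result from the question is the p == q diagonal.
n = a_vec.shape[0]
c_vec = full[np.arange(n), np.arange(n)]
print(np.allclose(c_vec[0], np.tensordot(a, b)))  # True
```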
Eric