
I have been looking at @Danita's answer (Vectorizing code to calculate (squared) Mahalanobis Distance), which uses np.einsum to calculate the squared Mahalanobis distance. In that case the arrays are: X of shape (m, n), U of shape (k, n), and T of shape (k, n, n), and the distances can be written as:

diff = X[np.newaxis, :, :] - U[:, np.newaxis, :]
D = np.einsum('jik,jkl,jil->ij', diff, T, diff)
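
For reference, here is a small self-contained sketch of that baseline recipe (the shapes and random data below are only assumptions for illustration), checked against an explicit loop:

import numpy as np

# Small shapes chosen arbitrarily for illustration
m, n, k = 5, 3, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(m, n))
U = rng.normal(size=(k, n))
A = rng.normal(size=(k, n, n))
T = A @ A.transpose(0, 2, 1)                       # k symmetric positive semi-definite matrices

diff = X[np.newaxis, :, :] - U[:, np.newaxis, :]   # shape (k, m, n)
D = np.einsum('jik,jkl,jil->ij', diff, T, diff)    # shape (m, k)

# Explicit loop for comparison: D[i, j] = (X[i] - U[j])^T T[j] (X[i] - U[j])
D_ref = np.empty((m, k))
for i in range(m):
    for j in range(k):
        d = X[i] - U[j]
        D_ref[i, j] = d @ T[j] @ d

print(np.allclose(D, D_ref))                       # True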

What I would like to do is add a dimension to X and T. Specifically, in my case X is of shape (m, n), U is of shape (l, k, n), and T is of shape (l, k, n, n), so the result D should be of shape (l, m, k).

However, since I haven't fully understood np.einsum, I'm having a hard time with my case. Can anyone help?

David

1 Answer


I think what you're asking about is broadcasting. The np.einsum docs say that broadcasting can be controlled with an ellipsis: by adding ... to your subscripts you can specify how the operation should broadcast over the extra leading dimensions. Something like this should do what you want:

diff = X[np.newaxis, :, :] - U[:, :, np.newaxis, :]
D = np.einsum('...jik,...jkl,...jil->...ij', diff, T, diff)

Note: You say you want to add the dimension to X and T in your question, but you seem to add it to U and T. My answer assumes the latter. If that's wrong, you'll have to switch the extra axis in diff = ....
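
For what it's worth, here is a small sketch (with arbitrary shapes, assuming the extra dimension is added to U and T as above) that checks the broadcast version against the original two-operand recipe applied slice by slice:

import numpy as np

# Arbitrary small shapes, assuming the extra dimension l on U and T
l, m, k, n = 2, 5, 4, 3
rng = np.random.default_rng(0)
X = rng.normal(size=(m, n))
U = rng.normal(size=(l, k, n))
A = rng.normal(size=(l, k, n, n))
T = A @ A.transpose(0, 1, 3, 2)                              # one (n, n) matrix per (l, k) pair

diff = X[np.newaxis, :, :] - U[:, :, np.newaxis, :]          # shape (l, k, m, n)
D = np.einsum('...jik,...jkl,...jil->...ij', diff, T, diff)  # shape (l, m, k)

# Compare against the non-broadcast recipe applied to each slice along l
for a in range(l):
    diff_a = X[np.newaxis, :, :] - U[a][:, np.newaxis, :]
    D_a = np.einsum('jik,jkl,jil->ij', diff_a, T[a], diff_a)
    assert np.allclose(D[a], D_a)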

David