
I have a 3D torch tensor with dimensions [Batch_size, n, n], which is the output of a layer of my network, and a constant 2D torch tensor with size [n, n]. How can I perform element-wise multiplication over the batch dimension so that the result is a torch tensor of size [Batch_size, n, n]?

I know it is possible to implement this operation using an explicit loop, but I am interested in the most efficient way.
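
Something like this is what I mean by the explicit-loop version (the names out and weight and the sizes are just placeholders for illustration):

    import torch

    batch_size, n = 4, 3                 # placeholder sizes
    out = torch.randn(batch_size, n, n)  # output of the layer, [Batch_size, n, n]
    weight = torch.randn(n, n)           # constant 2D tensor, [n, n]

    # explicit loop over the batch dimension
    result = torch.empty_like(out)
    for b in range(batch_size):
        result[b] = out[b] * weight      # element-wise product per batch item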


1 Answer


One option is to expand your weight matrix so that it has a matching batch dimension (without using any additional memory). E.g. twoDTensor.expand((batch_size, n, n)) returns the same underlying data, but viewed as a 3D tensor. You can see that the stride for the batch dimension is zero.
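
A minimal sketch of what that could look like (out, weight, and the sizes here are placeholder names; note that plain broadcasting with out * weight gives the same result without the explicit expand):

    import torch

    batch_size, n = 4, 3
    out = torch.randn(batch_size, n, n)      # [Batch_size, n, n]
    weight = torch.randn(n, n)               # constant [n, n] tensor

    # expand to [batch_size, n, n] without allocating new memory;
    # the batch dimension is a view with stride 0
    expanded = weight.expand(batch_size, n, n)
    print(expanded.stride())                 # (0, n, 1), i.e. (0, 3, 1) here

    result = out * expanded                  # element-wise product, [Batch_size, n, n]

    # broadcasting gives the same result without the explicit expand
    assert torch.equal(result, out * weight)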
