Questions tagged [numpy-einsum]

NumPy's `einsum` function implements the Einstein summation convention for multidimensional array objects. Use this tag for questions about how `einsum` can be applied to a particular problem in NumPy, or for more general questions about how the function works.

NumPy's einsum function implements the Einstein summation convention for multidimensional array objects. This allows many operations involving the multiplication or summation of values along particular axes to be expressed succinctly.
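For example, a minimal illustration of the convention (the arrays below are made up for demonstration):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# Sum over an axis: equivalent to a.sum(axis=1)
row_sums = np.einsum('ij->i', a)

# Matrix multiplication: equivalent to a @ b
product = np.einsum('ij,jk->ik', a, b)

# Trace of a square matrix: equivalent to np.trace(b[:3, :3])
trace = np.einsum('ii->', b[:3, :3])
```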

249 questions
2
votes
1 answer

Comparison of Looping and Einsum Operations on a List of Arrays for Speed Optimization

I want to make use of einsum to speed up some code, as follows. As a simple example, using a list of two (3,3) arrays called dcx that I created: In [113]: dcx = [np.arange(9).reshape(3,3), np.arange(10,19).reshape(3,3)]; dcx Out[113]: [array([[0, 1,…
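The excerpt stops before the actual per-array operation, so the following is only a hypothetical sketch of the general pattern: stack the list into one array so a single einsum call replaces the Python loop.

```python
import numpy as np

dcx = [np.arange(9).reshape(3, 3), np.arange(10, 19).reshape(3, 3)]

# Hypothetical per-array operation (the real one is cut off above):
# multiply each matrix in the list by a vector v.
v = np.ones(3)

# Loop over the list.
loop_result = np.array([m @ v for m in dcx])

# Stack the list into one (2, 3, 3) array so one einsum call handles every element.
stacked = np.stack(dcx)
einsum_result = np.einsum('nij,j->ni', stacked, v)

assert np.allclose(loop_result, einsum_result)
```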
2
votes
1 answer

In PyTorch, how can I avoid an expensive broadcast when adding two tensors then immediately collapsing?

I have two 2-d tensors, which align via broadcasting, so if I add/subtract them, I incur a huge 3-d tensor. I don't really need that though, since I'll be performing a mean on one dimension. In this demo, I unsqueeze the tensors to show how they…
Josh.F
  • 3,666
  • 2
  • 27
  • 37
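The demo is cut off, but the usual way to avoid materializing the broadcast tensor is to note that the mean distributes over the sum: averaging `a.unsqueeze(1) + b.unsqueeze(0)` over the broadcast dimension equals `a` plus the mean of `b`. A minimal sketch (shapes are assumptions):

```python
import torch

a = torch.randn(4, 7)   # assumed shape (n, d)
b = torch.randn(5, 7)   # assumed shape (m, d)

# Expensive: materializes an (n, m, d) tensor, then reduces it.
expensive = (a.unsqueeze(1) + b.unsqueeze(0)).mean(dim=1)

# Cheap: the mean over the broadcast dimension distributes over the sum,
# so no (n, m, d) intermediate is ever built.
cheap = a + b.mean(dim=0)

assert torch.allclose(expensive, cheap)
```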
2
votes
2 answers

Translating np.einsum to something more performant

Using python/numpy, I have the following np.einsum: np.einsum('abde,abc->bcde', X, Y) Y is sparse: for each [a,b], exactly one value along c equals 1; all others are 0. For an example of the relative size of the axes, X.shape is on the order of (1000, 5, 30, 30), and…
Faydey
  • 725
  • 1
  • 5
  • 12
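Since Y is one-hot along c, the contraction is really a scatter-add over c indices. One way to exploit that (a sketch with made-up toy sizes, not necessarily the fastest option) is `argmax` plus `np.add.at`:

```python
import numpy as np

A, B, C, D, E = 8, 5, 6, 4, 4          # toy sizes standing in for (1000, 5, ...)
X = np.random.rand(A, B, D, E)
Y = np.zeros((A, B, C))
Y[np.arange(A)[:, None], np.arange(B), np.random.randint(0, C, (A, B))] = 1.0

reference = np.einsum('abde,abc->bcde', X, Y)

# Because Y[a, b, :] is one-hot, each X[a, b] contributes to exactly one c slot.
c_idx = Y.argmax(axis=-1)                                  # (A, B)
out = np.zeros((B, C, D, E))
np.add.at(out, (np.arange(B)[None, :], c_idx), X)          # unbuffered scatter-add

assert np.allclose(reference, out)
```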
2
votes
2 answers

numpy.einsum substantially speeds up computation - but numpy.einsum_path shows no speedup, what am I missing?

I have an odd case where I can see numpy.einsum speeding up a computation but can't see the same in einsum_path. I'd like to quantify/explain this possible speed-up but am missing something somewhere... In short, I have a matrix multiplication where…
ajquinn
  • 23
  • 3
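For context, `np.einsum_path` only models the gain from reordering the pairwise contractions, not implementation effects such as BLAS dispatch or avoiding Python loops. A sketch of how to inspect it (the arrays here are placeholders):

```python
import numpy as np

A = np.random.rand(50, 60)
B = np.random.rand(60, 70)
C = np.random.rand(70, 40)

# einsum_path returns the chosen contraction order plus a report that
# includes the theoretical speedup from reordering the pairwise contractions.
path, report = np.einsum_path('ij,jk,kl->il', A, B, C, optimize='optimal')
print(path)      # e.g. ['einsum_path', (0, 1), (0, 1)]
print(report)    # includes "Optimized FLOP count" and "Theoretical speedup"

# The path report says nothing about memory layout or BLAS usage,
# which can still make np.einsum faster in practice.
result = np.einsum('ij,jk,kl->il', A, B, C, optimize=path)
```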
2
votes
0 answers

NumPy einsum explicit mode with programmatic interface

NumPy's einsum lets you explicitly choose which axes are contracted with the so-called explicit mode, using ->: >>> a = np.arange(9).reshape((3, 3)) >>> np.einsum('ij,ij->j', a, a) array([45, 66, 93]) einsum also lets you specify a programmatic…
asmeurer
  • 86,894
  • 26
  • 169
  • 240
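For reference, the programmatic (sublist) form the question refers to replaces subscript letters with integer axis labels, with a trailing list giving the output axes:

```python
import numpy as np

a = np.arange(9).reshape((3, 3))

# String form: contract over axis 0, keep axis 1.
s = np.einsum('ij,ij->j', a, a)

# Equivalent sublist form: 0 and 1 play the roles of 'i' and 'j',
# and the final [1] is the explicit output specification.
p = np.einsum(a, [0, 1], a, [0, 1], [1])

assert np.array_equal(s, p)   # both give array([45, 66, 93])
```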
2
votes
1 answer

Efficient contractions with NumPy einsum paths

I have a set of contractions that I would like to optimize; for the contractions I am using np.einsum() from the NumPy module. The minimal reproducible example is here: import numpy as np from time import time d1=2 d2=3 d3=100 a = np.random.rand(…
Zarathustra
  • 391
  • 1
  • 12
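When the same contraction is evaluated many times (the excerpt cuts off before the timing loop), the path can be computed once and reused. A sketch using the sizes from the excerpt; the operand shapes and subscripts below are hypothetical:

```python
import numpy as np
from time import time

d1, d2, d3 = 2, 3, 100
a = np.random.rand(d1, d2, d3)     # hypothetical operands; the originals are cut off
b = np.random.rand(d3, d3)

subscripts = 'ijk,kl->ijl'         # hypothetical contraction for illustration

# Compute the contraction order once...
path = np.einsum_path(subscripts, a, b, optimize='optimal')[0]

# ...then reuse it, so repeated calls skip the path search.
t0 = time()
for _ in range(1000):
    out = np.einsum(subscripts, a, b, optimize=path)
print('elapsed:', time() - t0)
```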
2
votes
1 answer

Einsum in python for a complex loop

I have complex loops in Python that I'm trying to "vectorize" to improve computation time. I found that the function np.einsum allows this, and I managed to use it, but I'm stuck on another loop. In the following code, I put the loop I managed to…
Thomas
  • 263
  • 3
  • 14
2
votes
2 answers

Einsum formula for repeating dimensions

I have this piece of code: other = np.random.rand(m,n,o) prev = np.random.rand(m,n,o,m,n,o) mu = np.zeros((m,n,o,m,n,o)) for c in range(m): for i in range(n): for j in range(o): mu[c,i,j,c,i,j] =…
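einsum cannot repeat an index in its output, so the `mu[c,i,j,c,i,j] = ...` assignment itself is usually vectorized with broadcast index arrays instead. A sketch (the right-hand side is a placeholder, since the excerpt is cut off):

```python
import numpy as np

m, n, o = 2, 3, 4
vals = np.random.rand(m, n, o)            # placeholder for whatever the loop assigns
mu = np.zeros((m, n, o, m, n, o))

# Broadcast index arrays pick out exactly the mu[c,i,j,c,i,j] entries,
# replacing the triple Python loop with one vectorized assignment.
c, i, j = np.ogrid[:m, :n, :o]
mu[c, i, j, c, i, j] = vals
```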
2
votes
1 answer

How to vectorize multiple matrix multiplication

I have a 2d matrix A[1000*90] and a 3d array B[90*90*1000]. I would like to calculate C[1000*90]: for i in range(1000): C[i,:] = np.matmul(A[i,:], B[:,:,i]). I understand that if I use a vectorized formula it will be faster; it seems like einsum might be the…
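For the loop shown in the excerpt, a single einsum call that maps the loop index i onto the last axis of B gives the same result:

```python
import numpy as np

A = np.random.rand(1000, 90)
B = np.random.rand(90, 90, 1000)

# Loop version from the question.
C_loop = np.empty((1000, 90))
for i in range(1000):
    C_loop[i, :] = np.matmul(A[i, :], B[:, :, i])

# One einsum call: C[i, k] = sum_j A[i, j] * B[j, k, i]
C = np.einsum('ij,jki->ik', A, B)

assert np.allclose(C_loop, C)
```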
2
votes
2 answers

How to perform matrix multiplication between two 3D tensors along the first dimension?

I wish to compute the dot product between two 3D tensors along the first dimension. I tried the following einsum notation: import numpy as np a = np.random.randn(30).reshape(3, 5, 2) b = np.random.randn(30).reshape(3, 2, 5) # Expecting shape: (3,…
Killswitch
  • 346
  • 2
  • 12
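With the shapes given in the excerpt, a batched matrix product over the leading axis yields the expected (3, 5, 5) result; `np.matmul` (or `@`) does the same thing:

```python
import numpy as np

a = np.random.randn(30).reshape(3, 5, 2)
b = np.random.randn(30).reshape(3, 2, 5)

# The batch index i is kept; the length-2 axis k is contracted.
c = np.einsum('ijk,ikl->ijl', a, b)     # shape (3, 5, 5)

assert np.allclose(c, a @ b)            # matmul broadcasts over the leading axis
```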
2
votes
1 answer

One line einsum functions with "interleaved" output indexing impossible to recreate using tensordot?

The similarities and differences between NumPy's tensordot and einsum functions are well documented and have been extensively discussed in this forum (e.g. [1], [2], [3], [4], [5]). However, I've run into an instance of matrix multiplication using…
ryanhill1
  • 165
  • 1
  • 9
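tensordot always places the free axes of the first operand before those of the second, so an "interleaved" einsum output can only be matched by adding a transpose afterwards. A hypothetical example of the pattern (the question's actual expression is not shown in the excerpt):

```python
import numpy as np

X = np.random.rand(4, 5, 6)
Y = np.random.rand(6, 7)

# einsum can interleave output axes freely: here 'adb' mixes axes of X and Y.
e = np.einsum('abc,cd->adb', X, Y)

# tensordot can only produce the 'abd' ordering in one call;
# matching the interleaved order requires an explicit transpose.
t = np.tensordot(X, Y, axes=(2, 0)).transpose(0, 2, 1)

assert np.allclose(e, t)
```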
2
votes
1 answer

Tensor multiplication with einsum

I have a tensor phi = np.random.rand(n, n, 3) and a matrix D = np.random.rand(3, 3). I want to multiply the matrix D along the last axis of phi so that the output has shape (n, n, 3). I have tried this: np.einsum("klj,ij->kli", phi, D), but I am not…
dba
  • 325
  • 1
  • 6
  • 16
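For what it's worth, the subscripts in the excerpt do apply D to the last axis of phi; a quick check against the explicit loop (n is arbitrary here):

```python
import numpy as np

n = 4
phi = np.random.rand(n, n, 3)
D = np.random.rand(3, 3)

out = np.einsum('klj,ij->kli', phi, D)   # out[k, l, i] = sum_j phi[k, l, j] * D[i, j]

# The same thing written as a loop over the first two axes.
check = np.empty((n, n, 3))
for k in range(n):
    for l in range(n):
        check[k, l] = D @ phi[k, l]

assert np.allclose(out, check)           # also equal to phi @ D.T
```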
2
votes
1 answer

Numpy: multiplying (1/2)^k for each row of np.array for each array in a list

Suppose I have the following list of arrays: dat = [np.array([[1,2],[3,4]]), np.array([[5,6]]), np.array([[1,2],[7,8],[2,3]]), np.array([[1,2],[3,4]])] Now, for each element in the list, I want to multiply each row of the array by (1/2)^k, where k…
kevin
  • 85
  • 1
  • 5
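Assuming k is the row index within each array (the excerpt is cut off before this is stated), broadcasting a column of powers of 1/2 does the job; an einsum form works too:

```python
import numpy as np

dat = [np.array([[1, 2], [3, 4]]),
       np.array([[5, 6]]),
       np.array([[1, 2], [7, 8], [2, 3]]),
       np.array([[1, 2], [3, 4]])]

# Row k of each array is scaled by (1/2)**k  (k assumed to be the row index).
scaled = [a * (0.5 ** np.arange(a.shape[0]))[:, None] for a in dat]

# Equivalent einsum form for a single array from the list:
a = dat[2]
scaled_a = np.einsum('i,ij->ij', 0.5 ** np.arange(a.shape[0]), a)
assert np.allclose(scaled_a, scaled[2])
```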
2
votes
1 answer

Python fast array multiplication for multidimensional arrays

I have two 3-dimensional arrays, A, B, where A has dimensions (500 x 500 x 80), and B has dimensions (500 x 80 x 2000). In both arrays the dimension that has the size 80 can be called 'time' (e.g. 80 timepoints i). The dimension that has the size…
user2743931
  • 174
  • 9
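The excerpt stops before saying which axes should be contracted, so the following is only a guess at the intended operation (summing over both the shared 500 axis and the 80 'time' axis); labelled einsum subscripts make whichever contraction is actually meant easy to write down:

```python
import numpy as np

A = np.random.rand(50, 50, 8)     # stand-in for (500, 500, 80)
B = np.random.rand(50, 8, 200)    # stand-in for (500, 80, 2000)

# One plausible contraction: sum over the shared axis and the time axis,
# giving a (500, 2000)-style result. A different intent only needs different subscripts.
out = np.einsum('ijt,jtk->ik', A, B)
```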
2
votes
2 answers

Numpy einsum compute outer product along axis

I have two numpy arrays that contain compatible matrices and want to compute their element-wise outer product using numpy.einsum. The shapes of the arrays would be: A1 = (i,j,k) A2 = (i,k,j) Therefore the arrays contain i matrices of shape (k,j)…
T A
  • 1,677
  • 4
  • 21
  • 29
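One plausible reading of "element-wise outer product" here is an outer product of the corresponding matrices for every index along the shared first axis; with einsum that is just a matter of giving the two trailing index pairs different labels:

```python
import numpy as np

i, j, k = 4, 3, 2
A1 = np.random.rand(i, j, k)
A2 = np.random.rand(i, k, j)

# For each position x along the first axis, form the outer product of the
# two matrices A1[x] and A2[x]; the result has shape (i, j, k, k, j).
outer = np.einsum('xjk,xlm->xjklm', A1, A2)

assert outer.shape == (i, j, k, k, j)
```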