Questions tagged [numpy-einsum]

NumPy's `einsum` function implements the Einstein summation convention for multidimensional array objects. Use this tag for questions about how `einsum` can be applied to a particular problem in NumPy, or for more general questions about how the function works.

NumPy's einsum function implements the Einstein summation convention for multidimensional array objects. This allows many operations involving the multiplication or summation of values along particular axes to be expressed succinctly.

249 questions
4
votes
2 answers

Computation of variable interaction (dot product of vectors in a matrix)

If I multiply a vector x (1,n) with itself transposed, i.e. np.dot(x.T, x), I will get a matrix in quadratic form. If I have a matrix Xmat (k, n), how can I efficiently compute the row-wise dot products and select only the upper triangular elements? So, atm. I…
Drey
  • 3,314
  • 2
  • 21
  • 26
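A possible sketch for the question above, assuming Xmat has shape (k, n) and the goal is each row's outer product with itself, keeping only the upper-triangular entries (names follow the excerpt, sizes are made up):

```python
import numpy as np

k, n = 5, 4
Xmat = np.random.rand(k, n)

# Per-row outer products: outer[r] == np.outer(Xmat[r], Xmat[r]), shape (k, n, n).
outer = np.einsum('ri,rj->rij', Xmat, Xmat)

# Keep only the upper-triangular entries (including the diagonal) of each outer product.
iu = np.triu_indices(n)
upper = outer[:, iu[0], iu[1]]            # shape (k, n*(n+1)//2)
```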
4
votes
0 answers

numpy.einsum sometimes ignores dtype argument

Suppose I have two arrays of type int8. I want to use einsum on them in such a way that all the calculations will be done as int64, but I don't want to convert the whole arrays to int64. If I understand correctly, this is what the dtype argument is…
ea1
  • 61
  • 3
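For context, a minimal sketch of how the dtype argument is intended to be used here: request an int64 computation without upcasting the input arrays first. Whether that request is honoured in every code path is exactly what the question asks.

```python
import numpy as np

a = np.full((3, 3), 100, dtype=np.int8)
b = np.full((3, 3), 100, dtype=np.int8)

# int8 * int8 overflows quickly; asking einsum for an int64 computation avoids
# converting the whole arrays beforehand.
c = np.einsum('ij,jk->ik', a, b, dtype=np.int64)
print(c.dtype, c[0, 0])    # expected int64 and 30000, if the dtype request is honoured
```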
4
votes
1 answer

Python `expm` of an `(N,M,M)` matrix

Let A be an (N,M,M) matrix (with N very large). I would like to compute scipy.linalg.expm(A[n,:,:]) for each n in range(N). I can of course just use a for loop, but I was wondering if there was some trick to do this in a better way (something like…
HolyMonk
  • 432
  • 6
  • 17
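One workaround sketch for the question above: if every A[n] happens to be diagonalizable and reasonably well-conditioned, the whole stack can be exponentiated via a batched eigendecomposition. This is not scipy.linalg.expm's algorithm, just an illustration; a plain loop over expm remains the robust fallback.

```python
import numpy as np

N, M = 1000, 4
A = np.random.rand(N, M, M)

# Batched eigendecomposition: w has shape (N, M), V has shape (N, M, M).
w, V = np.linalg.eig(A)

# expm(A[n]) = V[n] @ diag(exp(w[n])) @ inv(V[n]) for diagonalizable A[n].
expA = np.einsum('nij,nj,njk->nik', V, np.exp(w), np.linalg.inv(V))
expA = expA.real    # real input: imaginary parts are rounding noise here
```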
4
votes
1 answer

numpy dot product for tensors (3d times 2d)

Currently I use Na = (3, 2, 4) Nb = Na[1:] A = np.arange(np.prod(Na)).reshape(Na) b = np.arange(np.prod(Nb)).reshape(Nb) I want to calculate: r = np.empty((A.shape[0], A.shape[2])) for i in range(A.shape[2]): r[:, i] = np.dot(A[:, :, i], b[:,…
cknoll
  • 2,130
  • 4
  • 18
  • 34
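Assuming the truncated loop body is r[:, i] = np.dot(A[:, :, i], b[:, i]) (an assumption; the excerpt cuts off), a sketch of the single-call einsum equivalent:

```python
import numpy as np

Na = (3, 2, 4)
Nb = Na[1:]
A = np.arange(np.prod(Na)).reshape(Na)
b = np.arange(np.prod(Nb)).reshape(Nb)

# Loop version as described in the question.
r_loop = np.empty((A.shape[0], A.shape[2]))
for i in range(A.shape[2]):
    r_loop[:, i] = np.dot(A[:, :, i], b[:, i])

# One einsum call: r[a, i] = sum_j A[a, j, i] * b[j, i].
r = np.einsum('aji,ji->ai', A, b)
assert np.allclose(r, r_loop)
```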
4
votes
0 answers

4d Array Processing (using einsum?)

I have a matrix-based problem which I think could be solved (computationally cheaply) in a single line of code using numpy (perhaps einsum?), but can't get to the solution. I wonder if anyone can make any suggestions please? The problem is as…
SLater01
  • 459
  • 1
  • 6
  • 17
4
votes
1 answer

numpy: get rid of for loop by broadcasting

I am trying to implement the Expectation Maximization algorithm for a Gaussian Mixture Model in Python. I have the following line to compute the Gaussian probability p of my data X given the mean mu and covariance sigma of the Gaussian distribution: for i…
marilou
  • 43
  • 2
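A common way to drop the per-sample loop in this situation is to compute all squared Mahalanobis distances at once with einsum. This is only a sketch under assumed shapes (X is (n, d), mu is (d,), sigma is (d, d)); the question's actual loop body is not shown in the excerpt.

```python
import numpy as np

n, d = 500, 3
X = np.random.rand(n, d)
mu = np.random.rand(d)
sigma = np.cov(np.random.rand(d, 10 * d))     # some positive-definite covariance

diff = X - mu                                  # (n, d), broadcasts mu over rows
inv = np.linalg.inv(sigma)

# Squared Mahalanobis distance of every row at once: maha[i] = diff[i] @ inv @ diff[i].
maha = np.einsum('ij,jk,ik->i', diff, inv, diff)

norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma))
p = norm * np.exp(-0.5 * maha)                 # (n,) probability densities
```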
4
votes
2 answers

cross products with einsums

I'm trying to compute the cross-products of many 3x1 vector pairs as fast as possible. This n = 10000 a = np.random.rand(n, 3) b = np.random.rand(n, 3) numpy.cross(a, b) gives the correct answer, but motivated by this answer to a similar question,…
Nico Schlömer
  • 53,797
  • 27
  • 201
  • 249
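For reference, the Levi-Civita formulation of the row-wise cross product via einsum, as a sketch; np.cross is usually the simpler choice, and which one is faster depends on the NumPy version.

```python
import numpy as np

n = 10000
a = np.random.rand(n, 3)
b = np.random.rand(n, 3)

# Levi-Civita symbol eps[i, j, k]: +1 for even permutations of (0, 1, 2), -1 for odd.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# cross(a, b)[r, i] = sum_{j,k} eps[i, j, k] * a[r, j] * b[r, k]
c = np.einsum('ijk,rj,rk->ri', eps, a, b)
assert np.allclose(c, np.cross(a, b))
```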
4
votes
2 answers

Calculation/manipulation of numpy array

Looking to make this calculation as quick as possible. I have X as an n x m numpy array. I want to define Y to be the following: Y_11 = 1 / (exp(X_11-X_11) + exp(X_11-X_12) + ... + exp(X_11 - X_1N)), or for Y_00: 1/np.sum(np.exp(X[0,0]-X[0,:])). So…
Kevin
  • 447
  • 4
  • 13
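The quantity described reduces to a row-wise softmax. A vectorized sketch, assuming X is an n x m array and Y[i, j] = 1 / sum_k exp(X[i, j] - X[i, k]) for every row i:

```python
import numpy as np

n, m = 4, 6
X = np.random.rand(n, m)

# diff[i, j, k] = X[i, j] - X[i, k]; summing exp over k gives the denominator.
Y = 1.0 / np.exp(X[:, :, None] - X[:, None, :]).sum(axis=2)

# Algebraically this is exp(X[i, j]) / sum_k exp(X[i, k]), i.e. a row softmax,
# which avoids building the (n, m, m) intermediate at all:
Y2 = np.exp(X) / np.exp(X).sum(axis=1, keepdims=True)
assert np.allclose(Y, Y2)
```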
4
votes
1 answer

Is numpy.einsum efficient compared to fortran or C?

I have written a numpy program which is very time-consuming. After profiling it, I found that most of the time is spent in numpy.einsum. Although numpy wraps LAPACK and BLAS for many operations, I don't know whether numpy.einsum's performance is comparable to…
atbug
  • 818
  • 6
  • 26
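A minimal way to see the distinction for a plain matrix product, as a sketch; timings depend heavily on the NumPy build and the BLAS library it links against.

```python
import numpy as np

a = np.random.rand(500, 500)
b = np.random.rand(500, 500)

c1 = np.einsum('ij,jk->ik', a, b)                   # einsum's own compiled loops
c2 = np.einsum('ij,jk->ik', a, b, optimize=True)    # may dispatch to BLAS-backed routines
c3 = a @ b                                          # BLAS gemm directly
assert np.allclose(c1, c3) and np.allclose(c2, c3)
```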
4
votes
2 answers

Simplifying double einsum

I'm trying to use numpy.einsum to simplify a loop I have in my code. Currently, my code looks something like this: k = 100 m = 50 n = 10 A = np.arange(k*m*n).reshape(k, m, n) B = np.arange(m*m).reshape(m, m) T = np.zeros((n, n)) for ind in…
Javier C.
  • 137
  • 7
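The loop body is cut off in the excerpt, so purely as an illustration: if it were something like T += A[ind].T @ B @ A[ind] (a hypothetical body, not taken from the question), the whole accumulation collapses to one einsum call.

```python
import numpy as np

k, m, n = 100, 50, 10
A = np.arange(k * m * n, dtype=float).reshape(k, m, n)
B = np.arange(m * m, dtype=float).reshape(m, m)

# Hypothetical loop body (the excerpt cuts off before it): accumulate A[ind].T @ B @ A[ind].
T_loop = np.zeros((n, n))
for ind in range(k):
    T_loop += A[ind].T @ B @ A[ind]

# The same accumulation as a single contraction; optimize=True lets einsum pick a
# pairwise order instead of one much slower five-index loop.
T = np.einsum('kmi,ml,klj->ij', A, B, A, optimize=True)
assert np.allclose(T, T_loop)
```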
4
votes
1 answer

Ellipsis broadcasting in numpy.einsum

I'm having a problem understanding why the following doesn't work: I have an array prefactor that can be three-dimensional or six-dimensional. I have an array dipoles that has four dimensions. The first three dimensions of dipoles match the last…
jan
  • 1,408
  • 13
  • 19
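A sketch of how the ellipsis normally lets one subscript string cover both the 3-d and the 6-d prefactor; the shapes below are made up for illustration, since the question's exact contraction is not shown in the excerpt.

```python
import numpy as np

dipoles = np.random.rand(2, 3, 4, 5)            # 4-d, trailing axis of length 5
prefactor3 = np.random.rand(2, 3, 4)            # matches dipoles' first three axes
prefactor6 = np.random.rand(7, 8, 9, 2, 3, 4)   # same, with three extra leading axes

# '...' absorbs any leading axes of prefactor, so one subscript string covers both cases.
out3 = np.einsum('...abc,abcd->...d', prefactor3, dipoles)   # shape (5,)
out6 = np.einsum('...abc,abcd->...d', prefactor6, dipoles)   # shape (7, 8, 9, 5)
```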
3
votes
1 answer

Einsum is slow for tensor multiplication

I'm trying to optimize a particular piece of code to calculate the Mahalanobis distance in a vectorized manner. I have a standard implementation which uses traditional Python multiplication, and another implementation which uses einsum. However, I'm…
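For context, these are the two formulations usually compared in this situation, as a sketch; which one is faster depends on the sizes and on whether the matmul path can hit BLAS.

```python
import numpy as np

n, d = 100000, 10
delta = np.random.rand(n, d)           # differences from the mean
VI = np.random.rand(d, d)
VI = VI @ VI.T                         # some symmetric positive-definite inverse covariance

# einsum in one shot: m[i] = delta[i] @ VI @ delta[i]
m_einsum = np.einsum('ij,jk,ik->i', delta, VI, delta)

# matmul-based version, which routes the first product through BLAS.
m_matmul = ((delta @ VI) * delta).sum(axis=1)
assert np.allclose(m_einsum, m_matmul)
```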
3
votes
2 answers

Speeding up einsum for specific matrix and vector size

I have 2 arrays, one is of size: A = np.random.uniform(size=(48, 1000000, 2)) and the other is B = np.random.uniform(size=(48)) I want to do the following summation: np.einsum("i, ijk -> jk", B, A) as fast as possible. The summation would need to…
John
  • 465
  • 1
  • 6
  • 13
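For that specific contraction, a few equivalent formulations worth timing, as a sketch; the reduction is just a weighted sum over the first axis, so a BLAS matrix-vector product applies after a reshape.

```python
import numpy as np

# Shapes shrunk from the question's (48, 1000000, 2) to keep the example light.
A = np.random.uniform(size=(48, 10000, 2))
B = np.random.uniform(size=48)

r1 = np.einsum('i,ijk->jk', B, A)
r2 = np.tensordot(B, A, axes=1)                      # contracts B's only axis with A's first axis
r3 = (B @ A.reshape(48, -1)).reshape(A.shape[1:])    # BLAS matrix-vector product on a flat view
assert np.allclose(r1, r2) and np.allclose(r1, r3)
```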
3
votes
1 answer

Memory usage of torch.einsum

I have been trying to debug a certain model that uses the torch.einsum operator in a layer which is repeated a couple of times. While trying to analyze the GPU memory usage of the model during training, I have noticed that a certain Einsum operation…
ofir1080
  • 105
  • 1
  • 5
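One way to isolate the peak allocation of a single einsum call, as a sketch: it requires a CUDA device, and the shapes are placeholders rather than the model from the question.

```python
import torch

# Placeholder tensors; requires a CUDA device for the memory counters below.
a = torch.randn(64, 128, 512, device='cuda')
b = torch.randn(64, 512, 128, device='cuda')

torch.cuda.reset_peak_memory_stats()
out = torch.einsum('bij,bjk->bik', a, b)    # a batched matmul written as einsum
torch.cuda.synchronize()
peak_mib = torch.cuda.max_memory_allocated() / 2**20
print(f'peak allocation during the einsum: {peak_mib:.1f} MiB')
```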
3
votes
1 answer

Why is the optimize argument False by default in np.einsum?

Why is the default not optimize=True or one of the specific optimization options? I'm asking this because, as a user, I of course want the fastest computation by default.
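For background, a sketch of what the flag changes and how to inspect the contraction path it would choose:

```python
import numpy as np

a = np.random.rand(40, 50)
b = np.random.rand(50, 60)
c = np.random.rand(60, 70)

# Default (optimize=False): the three-operand expression is evaluated as a single
# nested loop. With optimize=True, einsum searches for a cheaper pairwise order first.
r_default = np.einsum('ij,jk,kl->il', a, b, c)
r_opt = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)
assert np.allclose(r_default, r_opt)

# einsum_path reports the chosen order and estimated cost without doing the contraction.
path, report = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='optimal')
print(report)
```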