Questions tagged [numpy-einsum]

NumPy's `einsum` function implements the Einstein summation convention for multidimensional array objects. Use this tag for questions about how `einsum` can be applied to a particular problem in NumPy, or for more general questions about how the function works.

NumPy's einsum function implements the Einstein summation convention for multidimensional array objects. This allows many operations involving the multiplication or summation of values along particular axes to be expressed succinctly.
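For instance, a matrix product and an axis sum can both be written as a single `einsum` call (a minimal illustration):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# matrix product: the repeated index j is summed over
prod = np.einsum('ij,jk->ik', a, b)
assert np.array_equal(prod, a @ b)

# summing along an axis: 'ij->i' sums out j
row_sums = np.einsum('ij->i', a)
assert np.array_equal(row_sums, a.sum(axis=1))
```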

249 questions
0
votes
1 answer

operation of Einstein sum of 3D matrices

The following code indicates that the Einstein sum of two 3D (2x2x2) matrices is a 4D (2x2x2x2) matrix. $c_{ijlm} = \sum_k a_{ijk} b_{klm}$, so $c_{0,0,0,0} = \sum_k a_{0,0,k} b_{k,0,0} = 1 \times 9 + 5 \times 11 = 64$. But, c_{0,0,0,0} = 35 according to…
techie11
  • 1,243
  • 15
  • 30
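The contraction in the question above can be written directly in `einsum` notation. With hypothetical inputs `np.arange(1, 9)` and `np.arange(9, 17)` (chosen to reproduce the values discussed), NumPy's row-major layout gives `a[0,0,:] = [1, 2]` and `b[:,0,0] = [9, 13]`, so `c[0,0,0,0] = 1·9 + 2·13 = 35`; the hand calculation 1·9 + 5·11 appears to read the arrays column-major:

```python
import numpy as np

a = np.arange(1, 9).reshape(2, 2, 2)    # hypothetical inputs matching the question
b = np.arange(9, 17).reshape(2, 2, 2)

# c[i,j,l,m] = sum_k a[i,j,k] * b[k,l,m]
c = np.einsum('ijk,klm->ijlm', a, b)
assert c.shape == (2, 2, 2, 2)

# verify one entry against the definition: row-major indexing gives 35, not 64
assert c[0, 0, 0, 0] == a[0, 0, 0] * b[0, 0, 0] + a[0, 0, 1] * b[1, 0, 0]
assert c[0, 0, 0, 0] == 35
```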
0
votes
1 answer

Accumulated sum of 2D array

Suppose I have a 2D numpy array like below dat = np.array([[1,2],[3,4],[5,6],[7,8]]) I want to get a new array in which each row equals the sum of its previous rows plus itself, like the following first row: [1,2] second row: [1,2] + [3,4] =…
Nicolas H
  • 535
  • 3
  • 13
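The running row sum described above is exactly `np.cumsum` along axis 0; for this tag's purposes it can also be written as an einsum against a lower-triangular matrix of ones, though `cumsum` is simpler and faster:

```python
import numpy as np

dat = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])

# running sum of rows: each output row is the sum of all rows up to it
acc = np.cumsum(dat, axis=0)
assert np.array_equal(acc, np.array([[1, 2], [4, 6], [9, 12], [16, 20]]))

# the same thing as a matrix product with a lower-triangular ones matrix
acc2 = np.einsum('ij,jk->ik', np.tri(4), dat)
assert np.allclose(acc2, acc)
```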
0
votes
1 answer

Reshaping before as_strided for optimisation

def forward(x, f, s): B, H, W, C = x.shape # e.g. 64, 16, 16, 3 Fh, Fw, C, _ = f.shape # e.g. 4, 4, 3, 3 # C is redeclared to emphasise that the dimension is the same Sh, Sw = s # e.g. 2, 2 strided_shape = B, 1 + (H - Fh)…
Nihar Karve
  • 230
  • 4
  • 15
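For context, the windowing in the question above can be reproduced with `numpy.lib.stride_tricks.sliding_window_view` and contracted with `einsum`. This is a sketch with hypothetical sizes, assuming NHWC input `x` and an `(Fh, Fw, C, Co)` filter `f` as in the question:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

B, H, W, C = 4, 8, 8, 3          # hypothetical sizes
Fh, Fw, Co = 2, 2, 5
Sh, Sw = 2, 2
rng = np.random.default_rng(0)
x = rng.random((B, H, W, C))
f = rng.random((Fh, Fw, C, Co))

# all windows: shape (B, H-Fh+1, W-Fw+1, C, Fh, Fw); then apply the stride
win = sliding_window_view(x, (Fh, Fw), axis=(1, 2))[:, ::Sh, ::Sw]

# contract filter height, width and channels in one einsum
out = np.einsum('bhwcij,ijco->bhwo', win, f)
assert out.shape == (B, 1 + (H - Fh) // Sh, 1 + (W - Fw) // Sw, Co)

# spot-check one output position against the definition
ref = (x[0, 0:Fh, 0:Fw, :, None] * f).sum(axis=(0, 1, 2))
assert np.allclose(out[0, 0, 0], ref)
```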
0
votes
1 answer

How to avoid using a for loop using either tensors or einsum?

I have the following problem at hand. F is a NumPy array of dimensions 2 X 100 X 65. I want to generate another array V whose dimensions are 2 X 2 X 65. This array V must be computed in the following way: For each t, V[:, :, t] = F[:, :, t] @ F[:,…
Raul Guarini Riva
  • 651
  • 1
  • 10
  • 20
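The excerpt above is truncated, but the only per-`t` product of `F[:, :, t]` that yields a 2x2 result is `F[:, :, t] @ F[:, :, t].T`; under that assumption the loop collapses to one einsum:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((2, 100, 65))

# loop version, assuming V[:, :, t] = F[:, :, t] @ F[:, :, t].T
V_loop = np.empty((2, 2, 65))
for t in range(65):
    V_loop[:, :, t] = F[:, :, t] @ F[:, :, t].T

# einsum version: contract the shared axis k for every t at once
V = np.einsum('ikt,jkt->ijt', F, F)
assert V.shape == (2, 2, 65)
assert np.allclose(V, V_loop)
```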
0
votes
0 answers

How to improve this bottleneck calculation in Python (use of C++?)

I have been working on a project where at some point I require high optimization for the algorithm used in the calculations. I would like to know which way is better to go, if this can be done efficiently in Python, or on the other hand, I should…
Zarathustra
  • 391
  • 1
  • 12
0
votes
1 answer

Dot-product a list of Matrices in numpy

Let's generate a 'list of three 2x2 matrices' that I call M1, M2 and M3: import numpy as np arr = np.arange(3*2*2).reshape((3, 2, 2)) I want to take the dot product of all these matrices: A = M1 @ M2 @ M3 What's the easiest and fastest way to do…
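Chaining the products can be done without unpacking the matrices, e.g. with `functools.reduce` over `np.matmul`, or with `np.linalg.multi_dot`, which also chooses an efficient multiplication order:

```python
import numpy as np
from functools import reduce

arr = np.arange(3 * 2 * 2).reshape((3, 2, 2))
M1, M2, M3 = arr

# chain the matrix products; reduce with matmul handles any number of matrices
A = reduce(np.matmul, arr)
assert np.array_equal(A, M1 @ M2 @ M3)

# multi_dot gives the same result and picks a good association order
assert np.allclose(np.linalg.multi_dot(list(arr)), A)
```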
0
votes
1 answer

Use numpy.einsum to calculate the covariance matrix of data

My aim is to calculate the covariance matrix of a set of data using numpy.einsum. Take for instance example_data = np.array([[0.2, 0.3], [0.1, 0.2]]) The following is the code I tried: import numpy as np d = example_data[0].shape[1] mu =…
Pazu
  • 267
  • 1
  • 7
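A sketch of the covariance computation via einsum, assuming rows are observations (hypothetical data; `np.cov` with `rowvar=False` is the reference):

```python
import numpy as np

X = np.array([[0.2, 0.3], [0.1, 0.2], [0.4, 0.1]])  # rows = observations
n = X.shape[0]

# center the data, then cov[i, j] = sum_k Xc[k, i] * Xc[k, j] / (n - 1)
Xc = X - X.mean(axis=0)
cov = np.einsum('ki,kj->ij', Xc, Xc) / (n - 1)
assert np.allclose(cov, np.cov(X, rowvar=False))
```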
0
votes
1 answer

Inner product of Tensors

Can someone please explain how to do an inner product of two tensors in Python to get a one-dimensional array? For example, I have two tensors with shapes (6,6,6,6,6) and (6,6,6,6). I need a one-dimensional array of size (6,1) or (1,6).
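One plausible reading is to contract the four shared leading axes, leaving a length-6 vector; both `einsum` and `tensordot` express this:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((6, 6, 6, 6, 6))
B = rng.random((6, 6, 6, 6))

# contract the first four axes of A against all of B, leaving the last axis
v = np.einsum('ijklm,ijkl->m', A, B)
assert v.shape == (6,)

# tensordot over the four matching axes gives the same result
assert np.allclose(v, np.tensordot(B, A, axes=4))
```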
0
votes
0 answers

Tensordot equivalent of einsum 'ij, ijk -> ik'

I am not using numpy but the Eigen::Tensor C++ API, which only has contraction operations; this is just to enable me to think through the implementation from Python. So 'ij, ijk -> ik' is basically like doing a for loop over each of the first dimensions. a =…
jack
  • 157
  • 5
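Because `i` appears in both inputs and in the output, `'ij,ijk->ik'` is a batched contraction that a single `tensordot` cannot express; broadcasting or batched `matmul` can, as this sketch shows:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((4, 5))
b = rng.random((4, 5, 3))

ref = np.einsum('ij,ijk->ik', a, b)

# broadcasting + sum over the shared j axis gives the same contraction
out = (a[:, :, None] * b).sum(axis=1)
assert np.allclose(out, ref)

# batched matmul also works: treat each a[i] as a 1 x 5 row vector
out2 = np.matmul(a[:, None, :], b)[:, 0, :]
assert np.allclose(out2, ref)
```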
0
votes
2 answers

Increase speed of numpy operations on large number of vectors

I would like a faster implementation of the functions shown below. Ideally the code should work when the number_points variable is set to 400-500. Is there any way I can improve the function definitions to increase speed (see sample run)? Here is my…
john
  • 57
  • 1
  • 7
0
votes
1 answer

Multiplying a 4D tensor with a 3D tensor using numpy einsum or tensordot

I have a (2, 5, 3) 3D tensor and a (2, 5, 4, 3) 4D tensor and I am trying to compute a row-wise product between them in the following manner: As an example, consider the following 3D and 4D tensor: A = [[[ 0 1 2] [ 3 4 5] [ 6 7 8] [ 9…
Lin Quincy
  • 13
  • 2
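One reading of the row-wise product above (dotting each row of the 3D tensor with the four corresponding rows of the 4D tensor) gives a (2, 5, 4) result:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2, 5, 3))
B = rng.random((2, 5, 4, 3))

# C[i, j, l] = sum_k A[i, j, k] * B[i, j, l, k]
C = np.einsum('ijk,ijlk->ijl', A, B)
assert C.shape == (2, 5, 4)

# spot-check: at each (i, j) this is a matrix-vector product
assert np.allclose(C[0, 0], B[0, 0] @ A[0, 0])
```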
0
votes
1 answer

How do I interpret np.einsum("ijij->ij")?

I am trying to make sense of np.einsum, and there do not appear to be examples related to my specific context. There are many good examples in the numpy docs, a guide here, here, and a stackoverflow answer here. However, there is no example…
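The repeated indices in `'ijij->ij'` select a generalized diagonal: `out[i, j] = x[i, j, i, j]`, as a small example confirms:

```python
import numpy as np

x = np.arange(2 * 3 * 2 * 3).reshape(2, 3, 2, 3)

# repeated indices pick out entries where axis 0 == axis 2 and axis 1 == axis 3
out = np.einsum('ijij->ij', x)
ref = np.array([[x[i, j, i, j] for j in range(3)] for i in range(2)])
assert np.array_equal(out, ref)
```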
0
votes
2 answers

Example of numpy combining elementwise (hadamard) and outer product of 3D array by vectorization or einsum

Suppose I have 2 3D matrices A and B A.shape = [ 2, 50, 60] B.shape = [ 3, 50, 60] conceptually, I see the matrices like column vectors A = [ a0, a1 ] where a0, a1 are matrices of shape [50,60] [ a0 a1 ] B = [ b0, b1 , b2] where b0, b1 , b2…
palazzo train
  • 3,229
  • 1
  • 19
  • 40
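Taking the leading axes of A and B as the "vector" indices and the trailing two axes as elementwise, the combined outer/Hadamard product described above is a single einsum (equivalently, plain broadcasting):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2, 50, 60))
B = rng.random((3, 50, 60))

# outer product over the leading axes, elementwise over the trailing ones:
# C[i, j, m, n] = A[i, m, n] * B[j, m, n]
C = np.einsum('imn,jmn->ijmn', A, B)
assert C.shape == (2, 3, 50, 60)

# broadcasting gives the same result
assert np.allclose(C, A[:, None] * B[None, :])
```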
0
votes
1 answer

Making numpy einsum faster for multidimensional tensors

I have some code that uses the following einsum: y = np.einsum('wxyijk,ijkd->wxyd', x, f) where (for example) the shape of x is (64, 26, 26, 3, 3, 3) and the shape of f is (3, 3, 3, 1), both having dtype=float %timeit np.einsum('wxyijk,ijkd->wxyd',…
Nihar Karve
  • 230
  • 4
  • 15
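Since the contraction above sums out all of `i, j, k`, it is a plain matrix product after flattening those axes; passing `optimize=True` also lets einsum search for such a path itself. A sketch with the question's shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((64, 26, 26, 3, 3, 3)).astype(np.float32)
f = rng.random((3, 3, 3, 1)).astype(np.float32)

ref = np.einsum('wxyijk,ijkd->wxyd', x, f)

# flatten i, j, k (3*3*3 = 27) and use a single matmul
y = x.reshape(64, 26, 26, 27) @ f.reshape(27, 1)
assert np.allclose(y, ref, atol=1e-4)

# or let einsum pick an optimized contraction path
y2 = np.einsum('wxyijk,ijkd->wxyd', x, f, optimize=True)
assert np.allclose(y2, ref, atol=1e-4)
```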
0
votes
0 answers

Efficient contraction of Levi-Civita tensor with Numpy einsum

I want to contract large, n-dimensional vectors with the Levi-Civita tensor. If I want to use NumPy's einsum function, I have to define the Levi-Civita tensor in advance, which quickly blows up my computer's memory at large n. I would just have…
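The dense tensor has n^n entries but only n! nonzeros (one per permutation), which is why materializing it blows up memory; for small n it can still be built from permutation parity. A minimal n = 3 construction, checked against the cross product:

```python
import numpy as np
from itertools import permutations

def levi_civita(n):
    """Dense Levi-Civita tensor; only feasible for small n."""
    eps = np.zeros((n,) * n)
    for perm in permutations(range(n)):
        # parity via counting inversions: even -> +1, odd -> -1
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        eps[perm] = (-1) ** inv
    return eps

eps = levi_civita(3)
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# contracting with two vectors reproduces the cross product
w = np.einsum('ijk,j,k->i', eps, u, v)
assert np.allclose(w, np.cross(u, v))
```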