Questions tagged [numpy-einsum]

NumPy's `einsum` function implements the Einstein summation convention for multidimensional array objects, which allows many operations involving the multiplication or summation of values along particular axes to be expressed succinctly. Use this tag for questions about how `einsum` can be applied to a particular problem in NumPy, or for more general questions about how the function works.
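
For example, both a matrix product and a trace can be written as a single subscript string; a minimal illustration (not tied to any particular question below):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# Matrix product: the shared label j is summed away.
ab = np.einsum('ij,jk->ik', a, b)      # same result as a @ b

# Trace: repeating a label picks out the diagonal, and omitting it
# from the output sums over it.
t = np.einsum('ii->', np.eye(3))       # same result as np.trace(np.eye(3))
```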

249 questions
3 votes, 0 answers

einsum for sparse tensor(s) in TensorFlow

I want to multiply two tensors, one sparse and the other dense. The sparse one is 3D and the dense one 2D. I cannot convert the sparse tensor to a dense tensor (i.e., avoid using tf.sparse.to_dense(...)). My multiplication is given by the following…
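
The excerpt cuts off before the actual expression, so the intended contraction isn't shown. A common workaround, assuming the contraction is an ordinary matrix product over a shared last axis, is to flatten the sparse operand to rank 2 (tf.sparse.sparse_dense_matmul classically operates on rank-2 sparse matrices), multiply, and reshape back; a sketch under that assumption, with made-up shapes:

```python
import tensorflow as tf

# Assumed shapes: sparse S is (A, B, C), dense D is (C, K); the contraction
# assumed here is over the shared axis C (the question's actual formula is
# truncated in the excerpt above).
A, B, C, K = 4, 5, 6, 3
S = tf.sparse.SparseTensor(indices=[[0, 0, 0], [1, 2, 3], [3, 4, 5]],
                           values=[1.0, 2.0, 3.0],
                           dense_shape=(A, B, C))
D = tf.random.normal((C, K))

# Flatten the leading axes so the sparse operand is rank 2, multiply,
# then restore the leading axes.
S2 = tf.sparse.reshape(S, (A * B, C))
out = tf.reshape(tf.sparse.sparse_dense_matmul(S2, D), (A, B, K))
```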
3 votes, 2 answers

How exactly does torch / np einsum work internally

This is a query about the internal workings of torch.einsum on the GPU. I know how to use einsum. Does it perform all possible matrix multiplications and just pick out the relevant ones, or does it perform only the required computation? For…
OlorinIstari (537)
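
One way to inspect what NumPy's einsum (as opposed to torch.einsum) will actually compute is np.einsum_path, which reports the pairwise contraction order and estimated cost rather than a brute-force enumeration of products; a small sketch:

```python
import numpy as np

a = np.random.rand(50, 60)
b = np.random.rand(60, 70)
c = np.random.rand(70, 80)

# einsum_path returns the contraction order einsum would use and a
# human-readable report of intermediate shapes and FLOP estimates.
path, report = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='greedy')
print(report)

# The precomputed path can be passed back to einsum to avoid re-planning.
result = np.einsum('ij,jk,kl->il', a, b, c, optimize=path)
```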
3 votes, 0 answers

CNN forward and backward with numpy einsum give different results to for loop implementation

I am trying to implement a Convolutional Neural Network from scratch with Python numpy. I implemented the forward and backward phases with numpy einsum (functions conv_forward and conv_backward). When I compared the results of einsum conv_forward and…
mdc (53)
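
For comparison, one way to express a stride-1, unpadded convolution forward pass with einsum (a sketch under an assumed NHWC layout, not the asker's conv_forward) is to build sliding windows with NumPy ≥ 1.20's sliding_window_view and contract them against the kernel:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Assumed layout: x is (batch, height, width, channels),
# w is (kernel_h, kernel_w, in_channels, out_channels), stride 1, no padding.
x = np.random.rand(2, 8, 8, 3)
w = np.random.rand(3, 3, 3, 5)

# Windows have shape (batch, out_h, out_w, channels, kernel_h, kernel_w).
windows = sliding_window_view(x, (3, 3), axis=(1, 2))

# Contract each window against the kernel over (channels, kernel_h, kernel_w).
out = np.einsum('nhwcij,ijco->nhwo', windows, w)   # shape (2, 6, 6, 5)
```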
3 votes, 1 answer

Element-wise matrix multiplication for multi-dimensional array

I want to realize component-wise matrix multiplication in MATLAB, which can be done using numpy.einsum in Python as below: import numpy as np M = 2 N = 4 I = 2000 J = 300 A = np.random.randn(M, M, I) B = np.random.randn(M, M, N, J, I) C =…
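
The excerpt truncates before the einsum call, so the exact subscripts aren't shown. A plausible reading, multiplying the M×M matrix in A against the M×M matrices in B for every remaining index combination (the subscript string below is an assumption, not the asker's code):

```python
import numpy as np

M, N, I, J = 2, 4, 2000, 300
A = np.random.randn(M, M, I)
B = np.random.randn(M, M, N, J, I)

# Assumed contraction: an M x M matrix product per (n, j, i) triple,
# i.e. C[a, c, n, j, i] = sum_b A[a, b, i] * B[b, c, n, j, i].
C = np.einsum('abi,bcnji->acnji', A, B)   # shape (M, M, N, J, I)
```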
3 votes, 1 answer

Memory and time in tensor operations python

Goal: My goal is to calculate the tensor given by the formula which you can see below. The indices i, j, k, l run from 0 to 40 and p, m, x from 0 to 80. Tensordot approach: This summation is just contracting 6 indices of an enormous tensor. I tried to…
Michal (33)
3 votes, 1 answer

numpy composition of einsums?

Suppose that I have a np.einsum that performs some calculation, and then pump that directly into yet another np.einsum to do some other thing. Can I, in general, compose those two einsums into a single einsum? My specific use case is that I am…
Him (5,257)
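
In many cases the answer is yes: einsum accepts more than two operands, so two chained calls can often be collapsed into one subscript string, which also lets NumPy choose a contraction order. A small sketch of the idea (not the asker's specific expressions):

```python
import numpy as np

A = np.random.rand(4, 5)
B = np.random.rand(5, 6)
C = np.random.rand(6, 7)

# Two chained einsums ...
step1 = np.einsum('ij,jk->ik', A, B)
two_calls = np.einsum('ik,kl->il', step1, C)

# ... can be written as a single three-operand einsum, letting NumPy
# pick a good contraction order via optimize=True.
one_call = np.einsum('ij,jk,kl->il', A, B, C, optimize=True)

assert np.allclose(two_calls, one_call)
```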
3 votes, 0 answers

How does einsum interact with numpy broadcasting?

Consider ndarrays x0=np.ones((3,3)) and y0, which has y0.shape either (3,3) or (1,3). I want a single einsum command that computes the dot products of the rows of these two arrays; in the case that y0.shape is (1,3), I want broadcasting over the…
MathManM (135)
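
NumPy's documented mechanism for broadcasting inside einsum is the ellipsis. A sketch of the row-wise dot products described above, where the (1, 3) operand broadcasts against the (3, 3) one:

```python
import numpy as np

x0 = np.ones((3, 3))
y0 = np.ones((1, 3))          # the same call works unchanged for shape (3, 3)

# '...' stands for the leading (row) axes and follows normal broadcasting
# rules; 'j' is summed, giving one dot product per row.
dots = np.einsum('...j,...j->...', x0, y0)
print(dots.shape)             # (3,) -- the (1, 3) operand broadcasts over rows
```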
3 votes, 2 answers

Using numpy einsum to compute inner product of column-vectors of a matrix

Suppose I have a numpy matrix like this: [[ 1 2 3] [ 10 100 1000]] I would like to compute the inner product of each column with itself, so the result would be: [1*1 + 10*10 2*2 + 100*100 3*3 + 1000*1000] == [101, 10004,…
Delgan (18,571)
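
Repeating the same labels on both operands and keeping only the column label in the output yields exactly these column-wise inner products; a minimal sketch:

```python
import numpy as np

a = np.array([[1, 2, 3],
              [10, 100, 1000]])

# Multiply each element by itself and sum over the rows (label 'i'),
# keeping one result per column (label 'j').
col_dots = np.einsum('ij,ij->j', a, a)
print(col_dots)   # -> 101, 10004, 1000009
```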
3 votes, 1 answer

Additional information on numpy.einsum()

I am trying to understand the numpy.einsum() function, but the documentation as well as this answer from stackoverflow still leave me with some questions. Let's take the Einstein sum and the matrices defined in the answer. A = np.array([0, 1, 2]) B =…
FenryrMKIII (1,068)
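
The excerpt cuts off before B is defined, so the arrays below are placeholders rather than the ones from that answer. A generally useful way to read a subscript string is as nested loops: repeated labels are multiplied together, and labels absent from the output are summed away:

```python
import numpy as np

A = np.arange(3)                  # shape (3,)
B = np.arange(12).reshape(3, 4)   # shape (3, 4) -- placeholder values

# np.einsum('i,ij->i', A, B) reads as: for each i, multiply A[i] by B[i, j]
# and sum over j (j does not appear in the output).
result = np.einsum('i,ij->i', A, B)

# Equivalent explicit loops:
manual = np.zeros(3)
for i in range(3):
    for j in range(4):
        manual[i] += A[i] * B[i, j]

assert np.allclose(result, manual)
```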
3 votes, 1 answer

How to vectorize/tensorize operations in numpy with irregular array shapes

I would like to perform the operation [formula image omitted in the excerpt]. If X had a regular shape, then I could use np.einsum; I believe the syntax would be np.einsum('ijp,ipk->ijk', X, alpha). Unfortunately, my data X has a non-regular structure on the 1st (if we zero index) axis.…
gazza89 (151)
3 votes, 1 answer

Why does `numpy.einsum` work faster with `float32` than `float16` or `uint16`?

In my benchmark using numpy 1.12.0, calculating dot products with float32 ndarrays is much faster than the other data types: In [3]: f16 = np.random.random((500000, 128)).astype('float16') In [4]: f32 = np.random.random((500000,…
satoru (31,822)
3 votes, 2 answers

Fast way to set diagonals of an (M x N x N) matrix? Einsum / n-dimensional fill_diagonal?

I'm trying to write fast, optimized code based on matrices, and have recently discovered einsum as a tool for achieving significant speed-up. Is it possible to use this to set the diagonals of a multidimensional array efficiently, or can it only…
PhysLQ (149)
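
Two ways this is commonly done: plain integer indexing over the last two axes, or assigning through the view that einsum returns for a pure relabelling such as '...ii->...i' (the writability of that view depends on the NumPy version, so treat it as an assumption and verify):

```python
import numpy as np

M, N = 4, 5
a = np.zeros((M, N, N))
vals = np.arange(M * N, dtype=float).reshape(M, N)

# Option 1: integer indexing over the last two axes.
idx = np.arange(N)
a[:, idx, idx] = vals

# Option 2: einsum with '...' returns a view of the diagonals; in recent
# NumPy versions that view is writable, so it can be assigned in place
# (verify on your version before relying on it).
b = np.zeros((M, N, N))
np.einsum('...ii->...i', b)[...] = vals

assert np.allclose(a, b)
```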
3 votes, 2 answers

Efficient tensor contraction in python

I have a list L of tensors (ndarray objects), with several indices each. I need to contract these indices according to a graph of connections. The connections are encoded in a list of tuples in the form ((m,i),(n,j)) signifying "contract the i-th…
Ziofil (1,815)
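
One way to drive einsum from such a connection list is its alternative calling convention, in which each operand is followed by a list of integer axis labels. Below is a sketch of a hypothetical helper (contract_network is an invented name, not the asker's code) that gives every axis a label and merges the labels of connected axes:

```python
import numpy as np
from collections import Counter

def contract_network(tensors, connections):
    """Contract tensors pairwise: ((m, i), (n, j)) means sum axis i of
    tensors[m] against axis j of tensors[n]. (Hypothetical helper.)"""
    # Give every axis of every tensor its own integer label.
    labels, next_label = [], 0
    for t in tensors:
        labels.append(list(range(next_label, next_label + t.ndim)))
        next_label += t.ndim

    # Connected axes must share a label so einsum sums over them.
    for (m, i), (n, j) in connections:
        labels[n][j] = labels[m][i]

    # Labels appearing exactly once are free indices and stay in the output.
    counts = Counter(l for lab in labels for l in lab)
    output = [l for lab in labels for l in lab if counts[l] == 1]

    # einsum's alternative form: operand, label-list, ..., output-label-list.
    args = []
    for t, lab in zip(tensors, labels):
        args += [t, lab]
    args.append(output)
    return np.einsum(*args, optimize=True)

# Example: contract axis 2 of A with axis 0 of B (an ordinary tensordot).
A, B = np.random.rand(2, 3, 4), np.random.rand(4, 5)
C = contract_network([A, B], [((0, 2), (1, 0))])
print(C.shape)   # (2, 3, 5)
```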
3 votes, 1 answer

Python tensor product

I have the following problem. For performance reasons I use numpy.tensordot and thus have my values stored in tensors and vectors. One of my calculations looks like this: [formula image omitted in the excerpt], where … is the expectancy value of w_j and … the expectancy value of…
HighwayJohn (881)
3 votes, 1 answer

How does architecture affect numpy array operation performance?

I have Ubuntu 14.04 with an "Anaconda" Python distribution with Intel's math kernel library (MKL) installed. My processor is an Intel Xeon with 8 cores and without Hyperthreading (so only 8 threads). For me numpy tensordot consistently outperforms…
Will Martin (993)