
I might do a poor job of explaining this, so I'll use an example somewhat similar to my problem, but here goes:

I need to repeatedly compute a complex operation that is a function of only a few scalars, say x1, x2 and x3. From these, a matrix X is computed whose elements are determined by a function of x1, x2, x3 (and the row and column location).

Several very large matrices, A, B and C, are involved in intermediate matrix products. The values of these matrices are constant.

Say we have matrices whose sizes are:

  • A: k by n
  • B: n by n
  • C: n by k
  • X: 3 by k

where n is very very large, and k is pretty large.

The function does (pseudocode):

func = function(x1, x2, x3)
  X = make_X(x1, x2, x3)
  return X * A * B * C * X^t

where * is the regular dot product. The output will only be a 3 by 3 matrix!
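
For concreteness, here is a minimal NumPy sketch of what I mean (make_X is a hypothetical stand-in for whatever actually builds X, and the sizes are kept small so the sketch runs; in my real problem n is huge):

import numpy as np

k, n = 50, 500                 # stand-in sizes; really n >> k >> 3
A = np.random.rand(k, n)       # constant
B = np.random.rand(n, n)       # constant
C = np.random.rand(n, k)       # constant

def make_X(x1, x2, x3):
    # hypothetical: each element is some function of x1, x2, x3
    # and of its row/column position
    rows = np.arange(3)[:, None]
    cols = np.arange(k)[None, :]
    return x1 * rows + x2 * cols + x3

def func(x1, x2, x3):
    X = make_X(x1, x2, x3)             # 3 by k
    return X @ A @ B @ C @ X.T         # 3 by 3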

I (possibly naively) think that there must be some automatic way to efficiently compile this down into essentially 9 functions of x1, x2, x3, one for each element of the output matrix.

I use numpy/scipy fairly often, but have no experience with sympy or theano, though they seem to be in the ballpark. Any suggestions on how to solve this problem?

P.S. Any code packages that address this would preferably be in Python, but they don't have to be, so long as they are callable from Python.

bill_e
  • Well there's `numpy.linalg.multi_dot` but unfortunately it won't help in this case. Are any of the matrices sparse by any chance? –  Jun 28 '16 at 17:03
  • Didn't know about multi_dot, thanks! Definitely not sparse. That'd be nice. – bill_e Jun 28 '16 at 23:33

2 Answers


Expressing the problem in einsum terms might help:

np.einsum('ij,jk,kl,lm,nm->in', X, A, B, C, X)

which might profitably be broken into 2 steps:

ABC = np.einsum('jk,kl,lm->jm', A, B, C) # k by k
np.einsum('ij,jm,nm->in', X, ABC, X)

So the i,j element of the result is:

R[i,j] = np.einsum('j,jm,m->', X[i,:], ABC, X[j,:])
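
A quick check with small random matrices (arbitrary sizes, just for illustration) that all of these forms agree:

import numpy as np

k, n = 10, 20
X = np.random.rand(3, k)
A = np.random.rand(k, n)
B = np.random.rand(n, n)
C = np.random.rand(n, k)

direct   = X @ A @ B @ C @ X.T
one_step = np.einsum('ij,jk,kl,lm,nm->in', X, A, B, C, X)
ABC      = np.einsum('jk,kl,lm->jm', A, B, C)
two_step = np.einsum('ij,jm,nm->in', X, ABC, X)

elementwise = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        elementwise[i, j] = np.einsum('j,jm,m->', X[i, :], ABC, X[j, :])

print(np.allclose(direct, one_step),
      np.allclose(direct, two_step),
      np.allclose(direct, elementwise))   # True True True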

and with the new @ operator (in this case just the operator version of dot):

R = X@A@B@C@(X.T)

For k, n = 10, 20 this last version is fastest.

If you are doing this for different combinations of x1, x2, x3, but one set of A, B, C, then computing ABC = A@B@C once should be a time saver. But I have doubts about the value of breaking X@ABC@(X.T) into 9 separate steps. ABC is k by k, so you've already done the larger calculations involving B.
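
In code, that amounts to something like this sketch (make_X stands in for however X is actually built from x1, x2, x3):

# one-time setup: the huge B never has to be touched again after this
ABC = A @ B @ C                      # k by k

def func(x1, x2, x3):
    X = make_X(x1, x2, x3)           # 3 by k
    return X @ ABC @ X.T             # (3, k) @ (k, k) @ (k, 3) -> 3 by 3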

hpaulj
  • Thanks, this definitely helps for this specific problem, but like I said, my actual problem is only similar in spirit. I was hoping to be clever and avoid calculating something like ABC entirely, and just somehow compiling expressions for each element of the final 3x3 matrix. – bill_e Jun 28 '16 at 23:49
  • Step back from the programming, and look at the mathematics. Is that kind of 'compiling' even possible? Think of all the degrees of freedom in those three big arrays. How can you hide all that in a single expression? – hpaulj Jun 29 '16 at 00:41

Since you mentioned SymPy, you can easily compute the whole thing once with

from sympy import symbols, Matrix, lambdify

x1, x2, x3 = symbols('x1, x2, x3')
X = Matrix(...)
A = Matrix(...)
B = Matrix(...)
C = Matrix(...)

(replace ... with the actual matrix entries). Then compute the product

result = X*A*B*C*X.T

You can then use lambdify on this matrix to convert it into a numeric function that can be used with numpy or scipy:

func = lambdify([x1, x2, x3], result)
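
Calling the lambdified function with plain floats then gives the 3 by 3 result directly. For example (assuming the ... entries above were actually filled in):

out = func(1.0, 2.0, 3.0)
print(out)         # the 3 by 3 result
print(out.shape)   # (3, 3); lambdify returns a NumPy array by default when numpy is installed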
asmeurer