I have arrays e (shape q by l), f (shape n by l), and w (shape n by l), and I want to create an array M where M[s,i,j] = np.sum(w[s, :] * e[i, :] * e[j, :]), and an array F where F[s,j] = np.sum(w[s, :] * f[s, :] * e[j, :]).
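For concreteness, the brute-force loop versions of those two definitions look like this (the helper names make_M_loop and make_F_loop are just for illustration):

import numpy as np

# Reference (slow) loop implementations of the definitions above,
# assuming e has shape (q, l) and f, w have shape (n, l).
def make_M_loop(e, w):
    q, n = e.shape[0], w.shape[0]
    M = np.empty((n, q, q))
    for s in range(n):
        for i in range(q):
            for j in range(q):
                M[s, i, j] = np.sum(w[s, :] * e[i, :] * e[j, :])
    return M

def make_F_loop(e, f, w):
    q, n = e.shape[0], w.shape[0]
    F = np.empty((n, q))
    for s in range(n):
        for j in range(q):
            F[s, j] = np.sum(w[s, :] * f[s, :] * e[j, :])
    return F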
Both are easy enough to do by, for instance, looping through the elements of M (as sketched above), but I want to be more efficient (my real data has something like 1M entries of length 5k). For F, I can use F = np.inner(w * f, e) (which I verified produces the same answer as the loop). M is more difficult, so the first step is to loop through dimension zero of w with a list comprehension, saying that M = np.stack([np.inner(r[:] * e, e) for r in w]) (I have verified this also works the same as the loop). np.inner() doesn't take any axes arguments, so it's not clear to me how to tell the arrays to just broadcast over all rows of w.
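Putting the pieces that do work together in one place (this assumes e, f, w are defined, e.g. the test values below, and uses the illustrative loop helpers above as a check):

# Fully vectorized F, half-vectorized M
F = np.inner(w * f, e)                            # F[s, j] = sum_k w[s, k] * f[s, k] * e[j, k]
M = np.stack([np.inner(r[:] * e, e) for r in w])  # still a Python loop over the n rows of w

# Both agree with the brute-force loops
print(np.allclose(F, make_F_loop(e, f, w)))   # True
print(np.allclose(M, make_M_loop(e, w)))      # True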
Finally, I need to use elements of M and F to create a matrix A, where A[s,i] = np.sum(np.linalg.inv(M[s, :, :])[i, :] * F[i, :]). This also looks inner-product-ish, but taking lots of individual inverses is time-consuming, so is there a way to compute inverses of slices, short of looping?
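For reference, a direct loop transcription of that formula (one np.linalg.inv call per slice of M, which is the part I'd like to avoid for ~1M slices) is:

# One matrix inverse per slice M[s], then the row products from the formula
n, q = F.shape
A = np.empty((n, q))
for s in range(n):
    Minv = np.linalg.inv(M[s, :, :])
    for i in range(q):
        A[s, i] = np.sum(Minv[i, :] * F[i, :])   # F[i, :] exactly as in the formula above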
Some test values in my arrays are as follows:
e = np.array([[-0.9840087 , -0.17812043],
[ 0.17812043, -0.9840087 ]])
w = np.array([[ 1.12545297e+01, 1.48690140e+02],
[ 7.30718244e+00, 4.07840612e+02],
[ 2.62753065e+02, 2.27085711e+02],
[ 1.53045364e+01, 5.63025281e+02],
[ 8.00555079e+00, 2.16207407e+02],
[ 1.54070190e+01, 1.87213209e+06],
[ 2.71802081e+01, 1.06392902e+02],
[ 3.46300255e+01, 1.29404438e+03],
[ 7.77638140e+00, 4.18759293e+03],
[ 1.12874849e+01, 5.75023379e+02]])
f = np.array([[ 0.48907404, 0.06111084],
[-0.21899297, -0.02207311],
[ 0.58688524, 0.05156326],
[ 0.57407751, 0.10004592],
[ 0.94172351, 0.03895357],
[-0.7489003 , -0.08911183],
[-0.7043736 , -0.19014227],
[ 0.58950925, 0.16587887],
[-0.35557142, -0.14530267],
[ 0.24548714, 0.03221844]])
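With these test values, q = l = 2 and n = 10, so the shapes come out as:

# Quick sanity check on shapes with the test data above
print(e.shape, w.shape, f.shape)       # (2, 2) (10, 2) (10, 2)
print(make_M_loop(e, w).shape)         # (10, 2, 2)
print(make_F_loop(e, f, w).shape)      # (10, 2)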