You are sum-reducing/collapsing the columns of A, using L to select those columns, and updating the columns of the output array based on the unique elements of L. So, you can use np.add.reduceat for a vectorized solution, like so -
# Sort order that makes columns with equal labels in L contiguous
sidx = L.argsort()
# Unique output columns and where each label group starts in the sorted order
col_idx, grp_start_idx = np.unique(L[sidx], return_index=True)
B_out = np.zeros((len(A), n_cols))
# Sum each contiguous group of same-labeled columns in one vectorized call
B_out[:, col_idx] = np.add.reduceat(A[:, sidx], grp_start_idx, axis=1)
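To see how the pieces fit together, here is a tiny illustrative run (the inputs below are made up for demonstration, not from the question):

import numpy as np

A = np.array([[1., 2., 3., 4.]])   # one row, four input columns
L = np.array([2, 0, 2, 1])         # output column for each input column
n_cols = 3

sidx = L.argsort()                 # -> [1, 3, 0, 2]; L[sidx] -> [0, 1, 2, 2]
col_idx, grp_start_idx = np.unique(L[sidx], return_index=True)
# col_idx -> [0, 1, 2]; grp_start_idx -> [0, 1, 2]

B_out = np.zeros((len(A), n_cols))
B_out[:, col_idx] = np.add.reduceat(A[:, sidx], grp_start_idx, axis=1)
# A[:, sidx] -> [[2., 4., 1., 3.]]; columns 0 and 2 of A share label 2,
# so they get summed together: B_out -> [[2., 4., 4.]]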
Runtime test -
In [129]: def org_app(A,n_cols):
...: B = np.zeros((len(A), n_cols))
...: for i, a in enumerate(A.T):
...: B[:, L[i]] += a
...: return B
...:
...: def vectorized_app(A,n_cols):
...: sidx = L.argsort()
...: col_idx, grp_start_idx = np.unique(L[sidx],return_index=True)
...: B_out = np.zeros((len(A), n_cols))
...: B_out[:,col_idx] = np.add.reduceat(A[:,sidx],grp_start_idx,axis=1)
...: return B_out
...:
In [130]: # Setup inputs with an appreciable no. of cols & fewer rows,
     ...: # so that the memory bandwidth needed to work with a huge
     ...: # number of row elems doesn't become the bottleneck
...: d,n_cols = 10,5000
...: A = np.random.rand(d,n_cols)
...: L = np.random.randint(0,n_cols,(n_cols,))
...:
In [131]: np.allclose(org_app(A,n_cols),vectorized_app(A,n_cols))
Out[131]: True
In [132]: %timeit org_app(A,n_cols)
10 loops, best of 3: 33.3 ms per loop
In [133]: %timeit vectorized_app(A,n_cols)
100 loops, best of 3: 1.87 ms per loop
As the number of rows becomes comparable to the number of cols in A, the high memory bandwidth requirements of the vectorized approach would offset any noticeable speedup from it.
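If you want to probe that regime on your own machine, a quick check might look like this (the shapes are just an assumption chosen to make rows and cols comparable; timings will vary by machine, so none are claimed here) -

# Same functions as above, but with rows comparable to cols
d, n_cols = 5000, 5000
A = np.random.rand(d, n_cols)
L = np.random.randint(0, n_cols, (n_cols,))
# %timeit org_app(A, n_cols)
# %timeit vectorized_app(A, n_cols)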