I am using Sparse Matrices in Eigen and I observe the following behavior:
I have the following sparse matrices with column-major storage: A [1,766,548 x 3,079,008] with 105,808,194 non-zero elements, and B [3,079,008 x 1,766,548] with 9,476,108 non-zero elements.
When I compute the product A x B, it takes almost 8 seconds.
When I compute transpose(B) x transpose(A), the cost increases dramatically: the computation runs for about 2,500 seconds.
Note that I load the transposed matrices from files; I don't transpose them with Eigen.
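For reference, here is a minimal sketch of the two computations (the scalar type, variable names, and the omitted loading step are assumptions, not my exact code):

```cpp
#include <Eigen/Sparse>

int main() {
    // Default Eigen::SparseMatrix storage is column-major.
    Eigen::SparseMatrix<double> A;   // 1,766,548 x 3,079,008, ~105.8M non-zeros
    Eigen::SparseMatrix<double> B;   // 3,079,008 x 1,766,548, ~9.5M non-zeros
    Eigen::SparseMatrix<double> At;  // transpose of A, loaded from file (not A.transpose())
    Eigen::SparseMatrix<double> Bt;  // transpose of B, loaded from file (not B.transpose())

    // ... matrices are loaded from files here ...

    // Fast case: finishes in about 8 seconds.
    Eigen::SparseMatrix<double> C1 = A * B;

    // Slow case: runs for about 2,500 seconds.
    Eigen::SparseMatrix<double> C2 = Bt * At;

    return 0;
}
```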
I didn't expect the two approaches to have exactly the same computational cost, but I don't understand such a large difference in execution time, since in both cases the matrices involved have exactly the same number of non-zero elements.
I am using g++ 7.4 and Eigen 3.3.7.