I'm working with a numpy matrix adj that represents the adjacency matrix of a networkx graph. When I construct adj as follows:
adj = sparse.csr_matrix(nx.adjacency_matrix(graph), dtype='longdouble').todense()
and later run adj = adj ** 2, I can see in htop that numpy uses all available threads.
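For reference, here is the full setup on a small example; the erdos_renyi_graph is a hypothetical stand-in for the real graph:

```python
import networkx as nx
import numpy as np
from scipy import sparse

# Hypothetical stand-in graph; the real graph comes from elsewhere
graph = nx.erdos_renyi_graph(50, 0.2, seed=1)

# Same construction as in the question: sparse CSR with extended
# precision, then converted to a dense np.matrix
adj = sparse.csr_matrix(nx.adjacency_matrix(graph), dtype='longdouble').todense()

# On np.matrix, ** means matrix power, i.e. adj @ adj
adj = adj ** 2
print(adj.shape, adj.dtype)
```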
However, because of precision loss, I attempted to introduce mpmath into the pipeline between construction and computation. I did it like this:
mp.dps = 120
adj = sparse.csr_matrix(nx.adjacency_matrix(graph), dtype='longdouble').todense()  # just like before
adjmp = mp.matrix(adj)  # this casts all values to mpf
adj = np.matrix(adjmp, dtype=object)  # and get back the np matrix, now with mpfs inside
The resulting adj looks like this:
matrix([[mpf('0.0'), mpf('0.0'), mpf('0.0'), ..., mpf('0.0'), mpf('0.0'),
mpf('0.125')], # [...]
which is what I expect.
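The round-trip can be exercised in isolation on a tiny hand-written matrix (a stand-in for the real adj); the entries come back as mpf objects:

```python
import numpy as np
from mpmath import mp

mp.dps = 120  # 120 decimal digits of working precision

# Tiny stand-in for the real adjacency matrix
adj = np.matrix([[0.0, 0.125],
                 [0.125, 0.0]], dtype='longdouble')

adjmp = mp.matrix(adj)                # casts all values to mpf
adj = np.matrix(adjmp, dtype=object)  # back to np.matrix, now holding mpfs
print(type(adj[0, 1]))
```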
The computation consists of two steps: first squaring adj, then the actual computation. The results show that the precision is indeed much higher, but htop shows that the squaring step now runs on only one thread, for some reason.
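The single-threaded squaring step can be reproduced in isolation. My understanding (not verified against the NumPy source) is that with dtype=object the matrix power falls back to a generic element-wise loop over the mpf objects instead of a BLAS call:

```python
import numpy as np
from mpmath import mp

mp.dps = 120

# Tiny object-dtype matrix of mpfs, standing in for the converted adj
a = np.matrix([[mp.mpf(0), mp.mpf('0.125')],
               [mp.mpf('0.125'), mp.mpf(0)]], dtype=object)

sq = a ** 2  # matrix power; with dtype=object this presumably cannot dispatch to BLAS
print(sq[0, 0])  # 0.125 * 0.125 = 0.015625
```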
When I run np.show_config(), I get:
blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]