Is there a general way of measuring how many floating-point operations a sequence of NumPy commands involves (commands such as `np.random.randint`, `np.sum`, `np.argmin`, `np.matmul` or `@`, etc.)?
Or is the only way to do it manually, reasoning from a pure mathematical standpoint and/or looking at how NumPy implements each function, as follows:
- matrix multiplication involves (2p - 1)mn FLOPs if we multiply an m × p matrix by a p × n matrix, since each of the mn output entries takes p multiplications and p - 1 additions (see the first sketch after this list)
- argmin involves O(n) ≈ cn comparisons for an array of length n, but what should c be? I tried looking at the NumPy source code, but I'm fairly confused about how `_wrapfunc` is supposed to work and which C code is actually relevant here (see the second sketch below).
- etc.
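For the matrix-multiplication case, here is a minimal sketch of the manual count (the helper name `matmul_flops` is mine, not a NumPy function), assuming the naive O(mnp) algorithm; the BLAS library NumPy dispatches to may organize the work differently, even if the arithmetic count is typically the same:

```python
import numpy as np

def matmul_flops(m, p, n):
    """FLOPs to multiply an (m, p) matrix by a (p, n) matrix.

    Each of the m*n output entries takes p multiplications
    and p - 1 additions, i.e. (2p - 1) FLOPs per entry.
    Assumes the naive algorithm, not e.g. Strassen.
    """
    return (2 * p - 1) * m * n

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
C = A @ B                      # the product itself
print(matmul_flops(3, 4, 5))   # (2*4 - 1) * 3 * 5 = 105
```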
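For `argmin`, my working assumption (not something I could confirm in the C source) is that the inner loop is essentially a single linear scan, which would make c ≈ 1: one comparison per element after the first, ignoring NaN handling and index bookkeeping. A pure-Python reference that makes the comparison count explicit (`argmin_with_count` is a name I made up):

```python
def argmin_with_count(a):
    """Reference argmin that counts element comparisons.

    A single left-to-right scan performs exactly len(a) - 1
    comparisons, suggesting c = 1 in the cn estimate; NumPy's
    actual C loop may do extra work (NaN checks, bookkeeping).
    """
    best = 0
    comparisons = 0
    for i in range(1, len(a)):
        comparisons += 1
        if a[i] < a[best]:
            best = i
    return best, comparisons

idx, cmps = argmin_with_count([3.0, 1.0, 2.0])
print(idx, cmps)  # 1 2
```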