
Is there a general way of measuring how many floating-point operations a sequence of Numpy commands involves? For example, commands such as np.random.randint, np.sum, np.argmin, np.matmul (or @), and so on.

Or is the only way to do it manually, thinking from a pure mathematical standpoint and/or looking at how Numpy implements the functions, as follows:

  • matrix multiplication involves (2p - 1)mn FLOPs, if we multiply an m × p matrix by a p × n matrix
  • argmin involves O(n) ≈ cn comparisons for an array of length n, but what should c be? I tried looking at the Numpy source code but I'm fairly confused about how _wrapfunc is supposed to work or what C code is relevant here.
  • etc.
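The manual-counting approach described above can be sketched in code. This is a minimal illustration of the matmul formula only; the function name and the choice to verify the count with an explicit triple loop are my own, not part of any NumPy API.

```python
import numpy as np

def matmul_flops(m, p, n):
    """FLOPs for multiplying an (m, p) matrix by a (p, n) matrix:
    each of the m*n output entries takes p multiplications and
    p - 1 additions, i.e. (2*p - 1)*m*n in total."""
    return (2 * p - 1) * m * n

# Count the operations of a naive triple loop to confirm the formula.
def count_naive(m, p, n):
    flops = 0
    for _ in range(m):
        for _ in range(n):
            flops += p        # p multiplications per output entry
            flops += p - 1    # p - 1 additions per output entry
    return flops

print(matmul_flops(3, 4, 5))   # 105
print(count_naive(3, 4, 5))    # 105
```

Note that this counts the mathematical operations, not what the BLAS backend actually executes; optimized kernels may use fused multiply-adds or algorithms with different operation counts.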
mic
    What's the point? The FLOPs are only part of the computation - code interpretation, function calling, making objects (array and otherwise), memory allocation, copying buffers, garbage collection all affect the total time. – hpaulj Jun 10 '20 at 05:38
  • Those are some good points. The goal is to compare performance against a paper that measures it in floating-point operations, which is also a measure that transfers across computers better than directly measuring time. – mic Jun 15 '20 at 02:22
  • My answer to this SO question might be relevant, https://stackoverflow.com/q/52201990/901925 – hpaulj Jun 19 '20 at 00:34

0 Answers