I'm writing a paper on an algorithm I tested. It used 17 kB more memory and 0.1 seconds less CPU time than a control counterpart. I'm confused about how to compare these two different metrics to make an assertion in favor of one algorithm's efficiency over the other. I understand that I'm comparing apples and oranges here, but is there an objective way I could explain why one algorithm would be better than the other?
2 Answers
One principled way of doing it would be to look at it in terms of "megabyte-seconds" or something like that, i.e., a process that runs for 2 seconds and takes 50 MB consumes 100 megabyte-seconds.
So by plugging in the before/after values for your total memory usage and runtime, you can see whether the process is less intensive in megabyte-seconds or not. If it is, you can argue that in addition to being faster, it is less intensive in memory use in some sense (e.g., if you are memory-bound and running in the cloud, you basically pay for megabyte-seconds).
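
For instance, here is a minimal sketch of that comparison in Python. The baseline memory and runtime figures are made up for illustration, since the question only gives the deltas (+17 kB, -0.1 s); substitute your own measured totals.

```python
# Illustrative sketch: compare two runs by their "megabyte-second" cost.
# Baseline numbers below are invented for the example; plug in your own
# measured totals for memory footprint and runtime.

def megabyte_seconds(memory_mb: float, runtime_s: float) -> float:
    """Space-time cost: memory footprint multiplied by runtime."""
    return memory_mb * runtime_s

control = megabyte_seconds(memory_mb=50.0,         runtime_s=2.0)  # 100.00 MB·s
new     = megabyte_seconds(memory_mb=50.0 + 0.017, runtime_s=1.9)  # ~95.03 MB·s

print(f"control: {control:.2f} MB·s, new: {new:.2f} MB·s")
# If the new algorithm's MB·s figure is lower, it is cheaper under this metric
# despite using 17 kB more resident memory.
```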

> is there an objective way I could explain why one algorithm would be better than the other?
Different use-cases and/or different hardware with different cache sizes/speeds might favour a different speed/space tradeoff.
(Using more scratch space dirties more cache, slowing down other code that would otherwise have had cache hits.)
You can certainly say that you improve the time performance on real hardware at a cost of some extra space, though.
Typical speed/space tradeoff considerations on current hardware certainly favour your new algorithm: 17 kiB is very small, smaller than the L1d caches in modern mainstream x86 CPUs.
e.g. you can make the simplistic argument that 17 kiB is a small amount of space compared to how much memory a modern CPU can access in 0.1 seconds (100 million nanoseconds). That's not a great argument, though.
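
To put rough numbers on that, here is a back-of-the-envelope sketch; the per-core bandwidth figure is an assumption for illustration, not a measurement.

```python
# Back-of-the-envelope comparison (assumed numbers, not measurements):
# how much data could a single core stream from DRAM in the 0.1 s saved,
# versus the 17 kB of extra space the new algorithm uses?

assumed_bandwidth_bytes_per_s = 20e9   # ~20 GB/s per core, a rough ballpark
time_saved_s = 0.1
extra_space_bytes = 17e3

reachable = assumed_bandwidth_bytes_per_s * time_saved_s   # ~2e9 bytes
print(f"memory touchable in {time_saved_s} s: {reachable / 1e9:.1f} GB")
print(f"extra space: {extra_space_bytes / 1e3:.0f} kB "
      f"({extra_space_bytes / reachable:.1e} of that)")
```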
