I've always seen code execution speed measured either in units of time (e.g. t milliseconds), or using asymptotic analysis (e.g. O(n log n)). Execution speed will vary depending on hardware performance, and asymptotic analysis can tell us how code will perform relative to the size of the input, but they're not absolute terms.

For space performance, we have asymptotic analysis, but we can also measure performance in bytes, which allows us to express (and predict) space performance in both relative and absolute terms. e.g. algorithm X's space complexity is O(n) or n * 32 bytes of memory for implementation Y in language Z.

For example, we can look at this code:

for i in range(n):
    pass

And if we know this will be executed using a 64-bit build of CPython, we can say the list of n integers will take up 72 + n * 8 bytes: a fixed header for the list object plus 8 bytes for each reference (independent of context/overhead).
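You can check a figure like that empirically. A minimal sketch, assuming a 64-bit CPython 3 build; the exact header size varies between CPython versions (72 bytes on some builds, 56 on others), so instead of hardcoding it the sketch reads it from sys.getsizeof:

```python
import sys

def list_overhead(n):
    """Bytes used by the list object itself: a fixed header plus one
    8-byte reference per slot (the integer objects are counted separately)."""
    # [0] * n allocates exactly n slots, with no growth slack,
    # unlike a list built up by repeated append()
    return sys.getsizeof([0] * n)

base = list_overhead(0)  # the fixed header size on this build
for n in (10, 100, 1000):
    # each extra element costs exactly 8 more bytes on a 64-bit build
    print(n, list_overhead(n))
```

Note that sys.getsizeof is shallow: it reports the list's own footprint, not the memory of the objects the references point to.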

My question is: Is there a unit of measurement we can use to express a piece of code's execution speed (or CPU usage) in absolute terms, similar to how we can with bytes for memory?

You can use physical time units such as seconds. Other common choices are FLOPs and the related CPU hours/years.
– Yuval Filmus, Jul 23 at 22:36

If you can describe and use a rigid model of computation such as a Turing machine or a RAM with arithmetic, you can use the number of operations, theoretically.
– Apass.Jack, Jul 23 at 23:47

3 Answers

I think the comparison is a bit clouded, actually. One unspoken concern is that you can't claim Python will use precisely 72 + 8n bytes just by reading the high-level code. There's a virtual machine with untold optimization and overhead, heavily dependent on memory layout, the CPU, your OS, even your version of Python. Memory blocks might be allocated in fixed increments: a whole page (typically 4 KB) even if all you need is 100 bytes. Then, within your fancy hyperthreaded CPU, bytes might get packed together, or ripped apart to run someone else's code alongside yours. So the number of physical bytes your program uses can be far more than you expect. I think those issues hit the same ambiguity you considered with time.

In the literature on complexity, these concerns are usually relegated to footnotes. That is, we accept that asymptotics are necessary for modeling time and space because we cannot quantify the really fine details, and if we could, the results would be unusable.

So I think the answer to your question is no, there is no such unit, but also that there isn't one for space either. It is important to remember that the role of modeling an algorithm is to capture its behavior in broad strokes. If you model it so precisely that you can predict it in fine detail, chances are your model is as complex to evaluate as the thing it models. We need models that simplify, so we can identify the big bottlenecks and not get stuck in the weeds. (Probably that is a theorem whose proof is basically the same as the proof that Kolmogorov-random sequences exist, but I only speculate on that point.)

As a side remark, there are time-space complexities that measure the two together, and these can come closer to modeling resource use in real life. They can be quantified in units of power (watts) or, in economic models, as cost (dollars spent to run).
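As a back-of-the-envelope sketch of that kind of economic model (every figure below is an illustrative assumption, not a measurement):

```python
# Rough resource-cost sketch: all numbers are made-up assumptions.
runtime_s = 3600.0     # assumed wall-clock runtime: one hour
avg_power_w = 150.0    # assumed average power draw of the machine, in watts
price_per_kwh = 0.30   # assumed electricity price, in dollars per kWh

# watts * seconds -> watt-hours -> kilowatt-hours
energy_kwh = avg_power_w * runtime_s / 3600 / 1000
cost = energy_kwh * price_per_kwh
print(f"{energy_kwh:.3f} kWh, ${cost:.4f}")
```

The point is only that once you fold time and power together, the result has a genuine physical unit (energy), and energy has a price.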

I had never considered that time and space could be measured together in the context of algorithm design to model externalities like watts and dollars... that actually helps me tremendously with my original goal. Really helpful answer, thank you.
– NightDriveDrones, Jul 24 at 5:34

Yes, and as far as I remember you can find it even in Knuth's classic books. It's the number of operations performed, usually split by operation type. For example, number-crunching algorithms are measured in terms of floating-point additions and multiplications performed; sorting algorithms are measured in terms of comparisons and swaps, and so on.
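As a sketch of that kind of accounting, here is a sort instrumented to report its operation counts (insertion sort chosen arbitrarily; the counts, not the seconds, are the measurement):

```python
def insertion_sort(a):
    """Sort the list in place and return (comparisons, swaps) as the cost."""
    comparisons = swaps = 0
    for i in range(1, len(a)):
        j = i
        # sift a[i] left until it reaches its sorted position
        while j > 0:
            comparisons += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                swaps += 1
                j -= 1
            else:
                break
    return comparisons, swaps

data = [5, 2, 4, 6, 1, 3]
print(insertion_sort(data), data)  # counts are hardware-independent
```

Unlike seconds, these counts are identical on every machine that runs the same input, which is exactly the kind of absolute unit the question asks about, at the price of ignoring how expensive each operation actually is.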

Number of CPU cycles
When evaluating the speed of (cryptographic) procedures, it's common to refer to the absolute number of cycles of the CPU.
See, e.g., the BIKE cipher specification (page 30).

In terms of complexity, all such procedures are supposed to be polynomial in the input length. Additionally, the input lengths vary a lot, so any asymptotic notion would be quite imprecise.
Speed in seconds depends heavily on the underlying architecture; since you want these applications to run on various systems, it is also not a good measurement.
Thus the number of CPU cycles seems a good option. However, I do not know whether this unit of measurement is used in other areas.
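Python can't read the CPU's cycle counter without platform-specific tooling, but as a rough sketch you can convert measured CPU time into an estimated cycle count by assuming a clock frequency (the 3 GHz figure below is an assumption, not a measured value):

```python
import time

ASSUMED_HZ = 3_000_000_000  # assumed 3 GHz clock; real CPUs vary and scale frequency

def estimate_cycles(fn, *args):
    """Estimate cycles spent in fn(*args) as CPU-seconds times assumed frequency."""
    start = time.process_time()   # CPU time, not wall-clock time
    fn(*args)
    elapsed = time.process_time() - start
    return elapsed * ASSUMED_HZ

cycles = estimate_cycles(sum, range(1_000_000))
print(f"~{cycles:.0f} cycles (under the assumed frequency)")
```

Benchmarking tools that report true cycle counts (e.g. via the hardware timestamp counter) avoid the frequency assumption, but as the point above suggests, the count still depends on the microarchitecture.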

The number of CPU cycles is just the number of seconds multiplied by the processor frequency. Granted, it doesn't depend on the frequency alone, but it still depends on the CPU architecture as well as other factors (memory speed, cache size and organization...).
– Bulat, Jul 24 at 13:21