How much energy does it cost to perform a one-way hash? Something in the SHA2 family, because we want some level of security.

How much energy does it cost to produce one ECDSA signature on 1k of data?

How much energy does it cost to verify an ECDSA signature once on 1k of data?

How much energy does it cost to send a TCP message of 1k of data from New York, US to London, UK?

We want these numbers to be reasonably hardware and implementation independent, which means they're going to be fairly fuzzy. Even without that constraint, asking about the performance, by any metric, of network calls between heterogeneous clients is going to be a very imprecise exercise. What we're really looking for is a lower bound, maybe coupled with an average-case measurement on the operations we mentioned above.

First, let's see if we can get a cache hit out of this. I highly doubt I'm the first person ever to wrestle with this question (although that would be pretty cool, in all honesty). A cursory Google search turns up a few candidate papers.

The last of these is behind a paywall but conveniently, because the paper focuses on ECC implementations, the energy consumption information I'm after is present in the cleartext abstract. According to it, the energy cost of a signature and verification is 46 μJ (for a process that takes 1.91 s on a chip with a clock frequency of 7.37 MHz). It's not strictly speaking relevant, but the same abstract also claims that ECDH can be executed for 42 μJ at 1.75 s on the same setup.
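For scale, if that 46 μJ covers the whole 1.91 s run, it implies an average draw of 46 μJ / 1.91 s ≈ 24 μW, which is the kind of figure you'd only see on a purpose-built low-power chip, nowhere near what a general-purpose laptop burns at idle.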

So we've got a first datapoint there, in any case. It's hardware and presumably implementation-specific, and the experimental process isn't outlined in the part of the paper I can read, so there's no telling how accurate this is, but it's a start (and it's not as though I'm about to print answers without verifying anyhow).

Ok, so we want to see the energy costs of some SHA256 implementation, ECDSA signing and verification, and a TCP message. This sounds like a job for a profiler of some sort. Or rather, kinda; a profiler will tell us how much memory and compute we're using, but not necessarily how much juice. So we'll need to figure something out. My gut reaction says to use this as an excuse to learn about profiling in Clojure, but realistically, we'll want to do similar things against multiple implementations (and on multiple machines). So, here we go, off the top of my head.
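In project.clj terms, that stack looks something like the following. This is a sketch of the dependency list rather than the post's pinned versions; the version numbers are illustrative.

```clojure
;; Illustrative dependency vector -- versions are assumptions, not pinned.
(defproject energy-probe "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.11.1"]
                 [com.taoensso/tufte "2.6.3"]   ; profiling
                 [digest "1.4.10"]              ; clj-digest: SHA-2 et al.
                 [mvxcvi/clj-pgp "1.1.0"]       ; ECDSA via BouncyCastle
                 [clj-sockets "0.1.0"]])        ; thin TCP wrapper
```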

tufte is a pretty good profiling library, clj-digest is an implementation of some digest hash functions, and clj-pgp is a library that gives us access to ECDSA signing/verification via BouncyCastle. I've included clj-sockets as the TCP implementation, but haven't actually done anything with it yet. Also, since I'm on Debian, I can poke at sysfs to get battery statistics and hopefully back out energy costs from there.
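The sysfs side amounts to reading a couple of files. A minimal sketch, assuming this machine exposes BAT0 with a charge_now attribute (some batteries report energy_now in µWh instead of charge_now in µAh):

```clojure
;; Poke sysfs for battery state. BAT0/charge_now are assumptions about
;; this particular machine, not universal attribute names.
(require '[clojure.string :as str])

(defn battery-attr
  "Read a numeric sysfs attribute for the given battery."
  [battery attr]
  (-> (slurp (str "/sys/class/power_supply/" battery "/" attr))
      str/trim
      Long/parseLong))

(defn charge-now
  "Current charge of BAT0 in µAh."
  []
  (battery-attr "BAT0" "charge_now"))
```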

First, force the input strings sequence. (Clojure is sometimes lazy, and I don't want it caching digest results. That would improve performance, but wouldn't give me an accurate cost model. So instead of serially calling sha256 or sign/verify on the same input string, we're generating a long-assed sequence of inputs, forcing it by hitting it with count, then calling the appropriate crypto functions on each one in turn.)
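In sketch form, that harness looks something like this. The function names are mine rather than lifted from the post's actual code; the shape is the point.

```clojure
;; Pre-generate distinct inputs so repeatedly hashing one string can't
;; flatter the numbers, then profile the digest loop with tufte.
(ns energy-probe.bench
  (:require [digest]
            [taoensso.tufte :as tufte :refer [profile p]]))

(defn rand-string
  "A random printable-ASCII string of n characters."
  [n]
  (apply str (repeatedly n #(char (+ 33 (rand-int 94))))))

(defn make-inputs
  "Pre-generate n random strings of size chars, forced up front so
  laziness doesn't interleave generation with the profiled work."
  [n size]
  (let [xs (repeatedly n #(rand-string size))]
    (count xs) ; realize the whole lazy seq before profiling
    xs))

(tufte/add-basic-println-handler! {})

(let [xs (make-inputs 100000 1024)]
  (profile {}
    (doseq [s xs]
      (p :sha256 (digest/sha-256 s)))))
```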

Hmph. I guess forcing the full list up-front is kind of memory intensive. I still don't really want to incur the overhead of generating the list inline with the test, though. I guess risking caching is the lesser evil for now? Or at least, let's do both evils and see where we can factor them out; one middle ground is sketched below.
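One way to split the difference is to realize inputs in modest chunks, so neither the fully-forced list nor inline generation dominates. A sketch only, reusing the hypothetical rand-string helper from above; the chunk size is a knob to tune, not something from the post:

```clojure
;; A lazy seq of eagerly-realized chunks of random strings: each chunk is
;; forced with doall, but chunks are only produced as the test consumes them.
(defn chunked-inputs
  [total chunk-size str-size]
  (for [_ (range (quot total chunk-size))]
    (doall (repeatedly chunk-size #(rand-string str-size)))))
```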

Ok, that still only gives us half the story. It tells us about how much energy ECDSA and SHA256 take out of this equation. There's another component we wanted to discuss, which is the TCP reads/writes involved. After a few commits which I won't rehash here, we can take a stab at answering that question.
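The sending side boils down to something like the following. I've sketched it against plain java.net.Socket rather than clj-sockets so the calls are unambiguous; the host and port are placeholders for whatever endpoint you bounce the 1k payloads off.

```clojure
;; Open a connection, write a ~1 KiB payload, close the socket.
(import '[java.net Socket])

(defn send-1k!
  "with-open matters here: a fresh, unclosed socket per message is
  exactly the kind of descriptor leak that eats memory over long runs."
  [^String host port ^bytes payload]
  (with-open [sock (Socket. host (int port))]
    (doto (.getOutputStream sock)
      (.write payload)
      (.flush))))
```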

Ok, so far so good. You'll notice I'm also doing this on a laptop with a different battery configuration; I do like being at least minimally complete. Upping the count past 100k in this case runs my machine out of memory, possibly because of all the dangling sockets I'm leaving around. Which kind of sucks, but we can get better data than 10k, at least.

One thing this unfortunately tells me is exactly how garbage this battery is¹. But it also shows pretty clearly what we can expect in terms of power draw without doing any work.

Ok, now. Because I'm not an engineer, I get to crib from the internet for the actual calculations once this data gathering is complete. The joules stored in a battery can be computed from its charge capacity and output voltage, and the spec sheet on Lenovo laptop batteries has a number for the latter: 11.4 V.

acpi -bi tells me that the primary battery on this machine is full at around 4440 mAh. I say "around" because the output varies slightly on each call, which tells me that this is an estimate.
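Putting those two numbers together, and assuming the nominal 11.4 V holds across the whole discharge curve, the full-charge energy works out to

E = Q × V = 4.44 Ah × 11.4 V ≈ 50.6 Wh ≈ 182 kJ

using 1 Wh = 3600 J.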

Around 3 kJ for 10 minutes, and about 500 J for one? Which I guess might mean that a battery's drain accelerates as it runs out of juice? Awesome. So the battery's current charge is also a variable we need to account for if we want to be accurate. The output of this process is going to be a lot fuzzier than I initially expected.
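For comparison's sake: 3 kJ over 10 minutes is 3000 J / 600 s = 5 W of average draw, while 500 J over one minute is 500 J / 60 s ≈ 8.3 W. That gap is what's driving the suspicion above; whether it's genuinely charge-dependent drain or just measurement noise at this resolution, I can't say yet.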

Checking this against the 1-minute-ish tests of the ecdsa-verify, ecdsa-sign, sha256 and tcp-send operations, which only wore the charge down by 1%, it looks like the cost of these operations is minuscule. As in, at the resolution of the monitoring equipment I've got, the baseline cost of one minute's worth of compute dominates the marginal cost of constantly hashing things and sending them out over TCP.

By causing a bunch more work, 22 minutes' worth to be exact, we can start to separate caused work from background work. Based on our earlier measurements, battery use isn't linear either, so I'm going to want a fresh 22-minute sample of "idle" battery drain, deferring to an external timer.
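In code, that sample amounts to something like the following, reusing the hypothetical charge-now from the sysfs sketch earlier:

```clojure
;; Snapshot charge, sleep, snapshot again, and convert the delta to
;; joules at the battery's nominal 11.4 V.
(defn idle-drain-joules
  "Battery drain over the given number of minutes, in joules.
  charge_now is in µAh; 1 mAh at 11.4 V is 11.4 mWh = 11.4 * 3.6 J."
  [minutes]
  (let [before (charge-now)]
    (Thread/sleep (* minutes 60 1000))
    (* (/ (- before (charge-now)) 1000.0) 11.4 3.6)))
```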

Ok, so the amount of battery drain we can attribute to actual ECDSA/SHA operations in this 22-minute test is around 2530 J. If we assume battery drain is linear in compute time, which is probably reasonable enough for our purposes, we can apportion that drain to each operation according to its share of the profiled time.
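The attribution step looks roughly like this. The names and the example share are mine, not the post's; the real percentages come out of tufte's summary table.

```clojure
;; Split the measured drain by an operation's share of profiled time,
;; then divide by call count to get energy per call.
(def total-drain-j 2530)
(def calls 1000000)

(defn per-op-mj
  "Energy per call in millijoules, given an op's share of profiled time."
  [share]
  (* 1000.0 (/ (* total-drain-j share) calls)))

;; e.g., if verification accounted for ~65% of the profiled time:
(per-op-mj 0.65) ;=> ~1.64
```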

And since we know that each operation was called 1,000,000 times in our clocking trial, we can guess how much a single operation consumed. It's on the order of millijoules: 1.65 mJ for verification, 0.7 mJ for signing, and 0.1 mJ for hashing with SHA256.

Tadaa, I guess.

The last thing to do is run our TCP profiling scheme up to the same scale, but since the Clojure sockets implementation is running me out of memory locally, I think this is going to call for hacking in something else. While I'm at it, I may as well add to the above data by taking measurements in different environments. I'm thinking Common Lisp next, and I'll let you know how it ends up going.

¹ Which is disappointing, because I've been trying to replace my x220i for something like seven years at this point. Laptops with better memory, better battery performance, better durability and at least equivalent Linux driver support apparently don't exist.