Modern mystical metaphors constantly remind us of the pervasiveness of computation. We are enveloped in a cloud of computing: the internet of things will let you start the coffee pot from your phone, and Big Data looms over every word we speak, every picture we take, and soon every heartbeat we make.

Those heartbeats are the sound of blood, carrying essential oxygen and nutrients, being pumped to every living cell in the body, each of which lies within a few cell diameters of the nearest capillary. A vasculature of silicon permeates our daily lives in much the same way: we seem never to be farther than an arm's length from the vital flow of instructions that add numbers and move bits around in electronic memory. Executing these sorts of simple instructions is what every program, be it a search engine or a space shuttle control system, is actually composed of. The computer chips we call processors are what pump these instruction flows around us. It is worth taking a step back and reflecting on just what the "global compute flow", pumping at astronomical numbers of instructions every second, actually does for humanity at a global scale.

How Computation Is Measured

In order to figure out how computers affect our daily lives, we need to measure their activity. Measuring the amount of "computation" a chip does isn't trivial, but a common unit is Floating Point Operations Per Second (FLOPS). FLOPS are measured by running a specific set of math problems that are representative of the kind of intense work a processor is meant to do. It doesn't always make sense to measure computational activity in FLOPS (computers do things other than math problems), but we can use fudge factors[flops2mips] to try to get things into common units. Now that we know how to measure computation, what do we actually do with it?
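As a toy illustration of the idea (nothing like a real benchmark such as LINPACK, and badly understating what the hardware can actually do), one can estimate floating-point throughput by timing a loop of floating-point work:

```python
import time

def estimate_flops(n=1_000_000):
    """Very rough throughput estimate: time n multiply-add pairs in a
    tight loop. Interpreter overhead dominates here, so this measures
    far less than the hardware's true capability -- it only sketches
    the 'time some math, divide by elapsed seconds' recipe."""
    x = 0.0
    start = time.perf_counter()
    for _ in range(n):
        x = x * 1.0000001 + 1.0   # one multiply + one add per iteration
    elapsed = time.perf_counter() - start
    return 2 * n / elapsed        # 2 floating-point ops per iteration

print(f"~{estimate_flops():.2e} FLOPS (toy, interpreter-bound)")
```

Real FLOPS figures come from optimized benchmarks running close to the metal, but the shape of the measurement is the same.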

One might reflect on their own life, try to break down the usage of their phone, computer at work, etc., and guess that most computation deals with looking at photos, playing music and maybe some video playback. Those activities aren't terribly intense; they are more like a stroll for today's computers. How much computation is devoted to intensive things like nuclear simulation, which (presumably) has something to do with keeping us safe from the real threat of nuclear warfare? Does that type of activity dwarf the collective photo viewing on our phones? Let's take a global look at estimates of compute flow from different sources to try to understand this better.

Hilbert & López took a close look at this idea in 2011; some interesting things to consider:

In 1986 about 41% of the world's FLOPS happened on calculators.

Using the estimated Compounded Annual Growth Rates (CAGR) in the paper, we should be at about 4.1 x 10²⁰ FLOPS in the world right now (remember that is operations per second across all computers in the world).
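That projection is just compound growth. A sketch, where the 2007 baseline and growth rate are illustrative stand-ins rather than figures quoted from the paper:

```python
def project(base_flops, cagr, years):
    """Compound a base capacity forward at a Compounded Annual
    Growth Rate: base * (1 + cagr)**years."""
    return base_flops * (1 + cagr) ** years

# Illustrative only: a hypothetical 2007 baseline in the 10**18 range
# growing ~60% per year for 8 years lands in the 10**20 range.
estimate = project(6.4e18, 0.60, 8)
```

Small differences in the assumed CAGR compound into large differences over a decade, which is worth keeping in mind for everything that follows.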

Almost all of the 2007 compute flow (>97%) was through GPUs (Graphics Processing Units).

Most 2007 FLOPS happened on personal computers and video game consoles. I'll bet mobile phones are growing in impact but are still small relative to the global total.

Servers must be growing as well, but I still suspect they aren't nearly as significant in compute capacity as they are in storage.

Where FLOPS Come From

Hilbert & López give us some idea what was happening in 2007, but what about now? Things change quickly in the world of computers.

Let's graph FLOPS estimates from various present-day sources to try to break down what the global compute flow might be today. It is important not to worry too much about the caveats behind some of these numbers; we are thinking in terms of powers of ten:

The leftmost column is the FLOPS capacity of a modern video game console (the PlayStation 4), and next to it is a single high-end GPU card for reference. The combined FLOPS of today's top 500 supercomputers is much less than 1% of the global total in 2007 (assuming Rmax) and about equal to the current total amount of distributed computing power on the BOINC platform that powers "@home"-type programs like SETI. The FLOPS shown for all of Google is just a wild guess based on this post, (generously) doubled each year from 2012. This page says the current bitcoin "hashrate", measured in FLOPS, is already larger than our projected estimate of today's global total compute flow based on 2011 estimates of capacity and growth rates. While there could be a number of explanations for this apparent inconsistency, it more than likely reflects the fact that FLOPS are not really a good way to measure bitcoin hashing (which is a very different type of computation from the benchmarks used to measure FLOPS). There is also some double counting here, since BOINC is used for bitcoin hashing. Take a look at the gist behind the graph if you want to correct any of the numbers I've used. I've been pretty willy-nilly about capacity vs. realized compute flow, but one would hope the two are similar to within an order of magnitude, assuming broadly similar use profiles.
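Since we are thinking in powers of ten, the comparison really only needs each source's order of magnitude. A sketch with rough, illustrative figures (these are my placeholders, not the graph's exact values):

```python
import math

# Rough, illustrative capacities in FLOPS -- placeholders for the graph's
# sources, not measurements.
sources = {
    "PlayStation 4": 1.8e12,
    "high-end GPU card": 5e12,
    "top 500 supercomputers (sum)": 3e17,
    "projected global total": 4.1e20,
}

for name, flops in sources.items():
    # Only the exponent matters at this level of precision.
    print(f"{name}: about 10^{round(math.log10(flops))} FLOPS")
```

At this resolution, being off by a factor of two or three on any individual source barely moves its column.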

So, What Do We Do With All These FLOPS?

So just what are we doing with all those executing instructions today? I don't think anyone really knows, but we can make some guesses and observations (assuming I haven't made a mistake in transcribing or interpreting my numbers):

Given the large contribution of video game consoles and PCs (with a presumed usage profile), most FLOPS in 2007 must have gone towards drawing triangles (the basic primitive of rendering frames in a 3D video game). I think this is still true today.

Bitcoin hashing is quite significant, and I presume most of it is done on custom ASICs rather than GPUs. As stated earlier, the FLOPS number here may be misleading, but there is still a lot of computation being done for the purpose of block hashing for bitcoin. We can't discount the possibility that most computing is for bitcoin.
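For reference, the computation in question is bitcoin's proof-of-work hash: SHA-256 applied twice to the 80-byte block header, which involves essentially no floating-point math at all (hence the awkward fit with FLOPS):

```python
import hashlib

def block_hash(header: bytes) -> bytes:
    """Bitcoin proof-of-work: double SHA-256 of the block header.
    Miners vary a nonce within the header until this digest,
    interpreted as an integer, falls below the network's
    difficulty target."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

digest = block_hash(b"\x00" * 80)  # a dummy 80-byte header
```

SHA-256 is all integer shifts, rotations, and additions, which is why ASICs built for it can vastly out-hash general-purpose hardware.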

Given that there are more bitcoin hashing FLOPS than the projected current global compute flow, I suspect growth rates have been higher than we thought they would be in 2011.

Hilbert & López suggested that most storage capacity is redundant. Assuming most compute flow is through Single Instruction Multiple Data (SIMD) GPUs, executing code must also be redundant in a sense. Another way of thinking about it is that the vast majority of instruction streams have many duplicates that operate on different input data.
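That redundancy is easy to sketch: one instruction stream, many data elements. Here is the shape of it in plain Python, with a tiny made-up "shader" standing in for a real GPU program:

```python
def shade(vertex, light):
    """A toy 'shader' (hypothetical, for illustration): the SAME
    instruction sequence runs for every vertex; only the input data
    differs. This is the sense in which SIMD-heavy compute flow is
    redundant -- many duplicate instruction streams, distinct data."""
    x, y, z = vertex
    # Brightness as a clamped dot product with the light direction.
    return max(0.0, x * light[0] + y * light[1] + z * light[2])

vertices = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
light = (0.0, 0.0, 1.0)
brightness = [shade(v, light) for v in vertices]  # one program, many data
```

A GPU executes the equivalent of that loop across thousands of lanes at once, which is exactly why counting its instruction streams individually overstates how much *distinct* code is running.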

Computer vision for "seeing" self-driving cars seems imminent and compute-intensive, but how much will it contribute to the total global compute flow? There are about a billion cars on the road today, which is about double the number of current-generation game consoles that have been sold. It will take a while before half the world's cars become self-driving, and in the meantime, the number of game consoles will continue to grow. It seems that for the foreseeable future, there will be considerably more game consoles and PCs, used for longer stretches of time, than self-driving cars. So I think we are still going to be drawing a lot of triangles for a while.
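The back-of-envelope here is simple enough to write down. The adoption fraction below is purely hypothetical; the car and console counts are the rough figures from the text:

```python
cars_on_road = 1e9          # rough figure: about a billion cars today
consoles_sold = 0.5e9       # cars are "about double" current-gen consoles
self_driving_share = 0.01   # hypothetical near-term adoption fraction

self_driving_cars = cars_on_road * self_driving_share
# Even granting per-unit FLOPS parity with a console, early self-driving
# fleets are a small slice of the triangle-drawing install base.
ratio = consoles_sold / self_driving_cars
```

With these (made-up) numbers, consoles outnumber self-driving cars fifty to one, before even accounting for hours of use per day.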

It does seem likely that computer vision will become increasingly applied on mobile platforms outside of self-driving cars. It is funny to think about the global compute flow shifting from getting the computer to draw for our eyes to getting it to see for them. The global compute flow may then bear an increased similarity to what is happening in our own brains, given that much of what our brain does is related to vision (though it is hard to say exactly how much).

An important fudge factor Hilbert & López used was 3 MFLOPS : 1 MIPS, or three "Floating Point Operations Per Second" to one (general) "Instruction Per Second". Let's assume this is a reasonable thing to do for now, but it's a bit like trying to measure the number of apples in units of oranges.
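As a unit conversion it is at least mechanical, even if conceptually apples-to-oranges. A sketch, taking the footnote's ratio at face value:

```python
FLOPS_PER_IPS = 3.0  # the 3 MFLOPS : 1 MIPS fudge factor from the footnote

def ips_to_flops(ips: float) -> float:
    """Convert generic instructions per second into 'equivalent' FLOPS
    using the fixed ratio above. The ratio is a blunt average across
    wildly different workloads, so treat results as order-of-magnitude."""
    return ips * FLOPS_PER_IPS

equivalent = ips_to_flops(1e6)  # 1 MIPS expressed as MFLOPS-equivalent
```

The real trouble isn't the arithmetic but the premise: a single fixed ratio between two benchmark-defined units hides how differently workloads mix integer and floating-point work.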