Can Futurology ‘Success’ be ‘Measured’?

Passions run high among sci-fi fans: never in this blog’s history has it been so necessary to note ‘this is only a bit of fun‘ in advance of a post!

How ‘good’ are we at predicting the future? More precisely, how could we measure how good anyone is at it? Well, we’re going to irritate some readers straight away here by treating ‘serious’ academic researchers, professional ‘futurists’ and science fiction writers as doing essentially the same thing. Perhaps, on that basis, we should narrow our notion of science fiction to ‘hard’ sci-fi loosely set somewhere in humanity’s future; but, that aside, that’s exactly what we’re going to do.

And it’s justified to a considerable extent: hard, future-based sci-fi and professional futurism have much in common. They both have their work judged in the here-and-now by a mixture of ‘reason’ and ‘common sense’ (whatever those mean) and, afterwards, more reliably, by how things actually turn out. In fact there may be a greater gulf between the academic and commercial futurist than between them together and the sci-fi writer. Whereas the professional futurist may have objectives set by technological or economic constraints, a storyteller is free to use a scientific premise as a blank canvas for any wider social, ethical, moral, political, legal, environmental or demographic discussion. This is useful: as we’ve noted before in this blog, asking technologists their view on the future of technology makes sense; asking their opinions regarding its wider impact doesn’t. And, looking back, there certainly isn’t much evidence that the ‘fact-oriented’ writers have more reliable crystal balls than their fictional counterparts.

So, here, we define an alternative role, that of ‘futurologist’, to be any of these, whatever their technical or creative background or motivation may be. And here’s where the fun starts … and people start screaming, ‘no, you can’t DO that!’ … because we’re going to assume that a futurologist’s predictive success – or accuracy – may be loosely assessed by their performance across three broad categories: positives, false positives and negatives, defined by three sets as follows:

Positives: predictions that have (to a greater or lesser extent) come to pass within any suggested time frame [the futurologist predicted it and it happened];

False Positives: predictions that have failed to transpire or have only done so in limited form or well beyond a suggested time frame [the futurologist predicted it but it didn’t happen];

Negatives: events or developments either completely unforeseen within the time frame or implied considerably out of context [the futurologist didn’t see it coming].

Obviously, positives are good but false positives and negatives are bad so we can crudely quantify this accuracy, the correctness of any attempt (f) at futurology, on the basis of these three sets, as:

Af = |Positives| / ( |Positives ∪ False positives ∪ Negatives| )

(where |S| is the size/cardinality of the set S: the number of elements it contains; and S ∪ T is the union of S and T: their combined membership)

This gives us an ‘accuracy’ figure in the range 0 to 1, where 1 is perfection and 0 is … well, could anything be that bad? Anyway, the higher (closer to 1) the better. For our purposes, assuming simple (uncomplicated and uncontested) placement of predictions/outcomes into the three sets (with no overlap):

Af = |Positives| / ( |Positives| + |False positives| + |Negatives| )

Now, obviously, all these terms are vague, people will argue, they do overlap a bit really and they change over time but this might work, in a simple sense, as a snapshot. More worryingly though, the individual elements (predictions or lack of them) may not all be of the same significance (see later) but, for now, repeating our ‘this is only a bit of fun’ disclaimer, perhaps we could try it out ‘for real’ … or at least, ‘for fiction’?
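For the simple, uncontested, non-overlapping case, the calculation is trivial; here’s a minimal sketch in Python (the prediction sets below are purely illustrative, not a serious assessment of anything):

```python
def accuracy(positives, false_positives, negatives):
    """Af = |P| / (|P| + |F| + |N|): 1 is perfection, 0 is total failure.

    Assumes the three sets are disjoint and their membership is uncontested.
    """
    total = len(positives) + len(false_positives) + len(negatives)
    return len(positives) / total if total else 0.0

# Purely illustrative: two hits, one miss, one thing never foreseen.
P = {"handheld communicators", "voice-controlled computers"}
F = {"routine faster-than-light travel"}
N = {"a ubiquitous global Internet"}

print(accuracy(P, F, N))  # 2 / (2 + 1 + 1) = 0.5
```

Because the sets are assumed disjoint, the union in the earlier formula is just the sum of the three sizes.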

Star Trek made its TV debut in 1966. Although five decades have now passed, we’re still over two centuries from its generally assumed setting in time. This makes most aspects of assessment of its futurology awkward but, as an exercise and example in the here-and-now, we can try:

Or ‘a bit better than half-right’ in crude terms. It’s crude anyway, of course, because we’re reliant on having ‘thought of everything’, and some things may be disputed or more important than others (more on that shortly). But, taking it for what it is, a similar exercise can be attempted with other well-known sci-fi favourites such as Back to the Future and Star Wars or, with the added complexity of insincerity, Red Dwarf. (That’s left to the reader: this piece is going to be contentious enough anyway without trying to suggest that any one series with a particular fanatical cult following is somehow ‘better’ than another in this respect!)

In Star Trek’s case, the interesting, and particularly quarrelsome, category is the negatives. Did it really fail to predict modern, and still evolving, networking technology? Is there no Internet in Star Trek? There are certainly those who would defend it against such claims but such arguments generally take the form of noting secondary technology that could imply a pervasive global (or universal) communications network: there is no first-hand reference point anywhere. The ship’s computer, for example, clearly has access to a vast knowledge base but this always appears to be held locally. Communication is almost always point-to-point (or a combination of such connections) and any notion of distributed processing or resources is missing. Moreover, in many scenes, there is a plot-centred – and clearly intentional – sense of isolation experienced by its characters, which is incompatible with today’s understanding and acceptance of a ubiquitous Internet: on that basis alone, it is unlikely that its writers intended to suggest one. But, as already mentioned, a more important observation is that not all of these predictions are of the same significance. Is predicting a USB stick really the same as overlooking the Internet, for example? Can we take this on board?

Well, only if we’re prepared to make another set of judgements on the consequence of each individual prediction, of course. This would probably best be done by allocating a weight to each element. So, if we now define the original three sets (positives, false positives and negatives) as P = {1, 2, …, p}, F = {1, 2, …, f} and N = {1, 2, …, n}, we also need to add their weightings: WP = (wp1, wp2, …, wpp), WF = (wf1, wf2, …, wff) and WN = (wn1, wn2, …, wnn). This gives a revised (weighted) formula for accuracy of:

Af = (wp1 + wp2 + … + wpp) /

( (wp1 + wp2 + … + wpp) + (wf1 + wf2 + … + wff) + (wn1 + wn2 + … + wnn) )

(With equal weights for all elements, this reduces, as it should, to the original, simple formula for Af so it’s up to us if we use the weighting option or not.)
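The weighted version is just as easy to sketch, and we can sanity-check the reduction claim directly: equal weights reproduce the simple count-based figure. (Again Python; the weight values here are made up for illustration.)

```python
def weighted_accuracy(wp, wf, wn):
    """Weighted Af: sum of positive weights over the sum of all weights."""
    total = sum(wp) + sum(wf) + sum(wn)
    return sum(wp) / total if total else 0.0

# With equal weights this reduces to the simple formula: 2 positives out of
# 4 elements in total gives 0.5, exactly as the unweighted count would.
print(weighted_accuracy([1, 1], [1], [1]))    # 0.5

# ... but weighting a single negative heavily drags the accuracy down.
print(weighted_accuracy([3, 4], [5], [10]))   # 7 / 22 ≈ 0.318
```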

Asking 90 Wrexham Glyndŵr University students, across various Computing degrees, to rate the significance of each element on a 0-10 scale (and thanks to them all for this help), then taking means, suggests the following weights:

… a slight change for the worse. In other words, the measure of accuracy goes down when we consider the perceived importance of each prediction (or failure to make a prediction). Does this mean Star Trek ‘got the simple stuff right but not the hard bits’? (Or is it just in the nature of things that the elements of set N will be weighted more heavily than the other two because of their ‘out-of-the-blue’ nature? Could this bias be weighted into the P, F and N sets at the top level? Or maybe we should make the comparison with other sci-fi – or wider futurology – attempts to get a clearer picture?) Finally, we can note that our A calculation, for any f, will change over time. For any given f, increasing real-world certainty of outcomes will potentially enlarge the sets P and N but shrink F. So for which fs will Af get ‘better’ and which ‘worse’? Whose predictions are ‘on the way up’ and whose ‘on the way down’?

[Runs away, ducking for cover, shouting, “remember this is only a bit of fun”!]