‘THAT Won’t Happen!’

In 1977, Ken Olsen, founder and CEO of DEC, made an oft-misapplied statement: “there is no reason for any individual to have a computer in his home”. A favourite of introductory college computing modules – supposedly highlighting the difficulty of keeping pace with rapid change in computing technology – it appears foolish for a remark made at a time when personal computers were already under development, including in his own laboratories. The quote is out of context, of course: it actually refers to Olsen’s scepticism regarding fully automated assistive home technology systems (climate control, security, cooking food, etc.). However, as precisely these technologies now gain traction, there can be little doubt that, if he stands unfairly accused of being wrong in one respect, time will inevitably prove him so in another.

Remember, a futurologist’s predictive success – or accuracy – might be loosely assessed by their performance across three broad categories: positives, false positives, and negatives, defined as follows:

Positives: predictions that have (to a greater or lesser extent) come to pass within any suggested time frame [the futurologist predicted it and it happened];

False Positives: predictions that have failed to transpire or have only done so in limited form or well beyond a suggested time frame [the futurologist predicted it but it didn’t happen];

Negatives: events or developments either completely unforeseen within the time frame or implied considerably out of context [the futurologist didn’t see it coming].

In fact, the assumption at the time was that such a ‘false negative’ could be lumped in with either the ‘false positives’ or the ‘negatives’. Saying that something wouldn’t happen (but then it did) was similar either to saying it would happen (but it didn’t: just the logical opposite) or to not seeing it coming in the first place. The foresight that it might happen (while insisting it wouldn’t) amounted to the same thing as ignoring it entirely: in both cases, the resultant prediction was for a world without whatever it was! (And somehow that was wrong!)

But, taking into account that the validity of predictions across these different categories changes over time, together with the notion (from before) that some of these things might be more significant than others, perhaps these ‘false negatives’ deserve a set of their own. So, extending our previous classes, we have:

Positives: predictions that have (to a greater or lesser extent) come to pass within any suggested time frame [the futurologist predicted it and it happened];

False Positives: predictions that have failed to transpire or have only done so in limited form or well beyond a suggested time frame [the futurologist predicted it but it didn’t happen];

Negatives: events or developments either completely unforeseen within the time frame or implied considerably out of context [the futurologist didn’t see it coming];

False Negatives: events or developments considered but dismissed in context [the futurologist said it wouldn’t happen].
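As a rough sketch, the four classes above reduce to a simple decision rule. The function and the sample records here are entirely my own invention, purely for illustration:

```python
from collections import Counter

def classify(predicted, dismissed, happened):
    """Place one prediction record into the four classes defined above.

    predicted -- the futurologist said it would happen
    dismissed -- the futurologist considered it but said it wouldn't
    happened  -- it actually came to pass (within the time frame)
    """
    if predicted and happened:
        return "P"    # positive: predicted it and it happened
    if predicted and not happened:
        return "FP"   # false positive: predicted it but it didn't happen
    if dismissed and happened:
        return "FN"   # false negative: said it wouldn't happen, but it did
    if happened:
        return "N"    # negative: didn't see it coming at all
    return None       # correctly dismissed, or simply never in view

# Hypothetical scorecard: (predicted, dismissed, happened)
record = [
    (True,  False, True),    # called it, and it arrived
    (True,  False, False),   # called it, still waiting
    (False, False, True),    # never saw it coming
    (False, True,  True),    # ‘THAT won’t happen!’ – but it did
]

tally = Counter(classify(*r) for r in record)
# tally now holds one prediction in each of the four classes
```

Note that a record the futurologist correctly dismissed (or never faced at all) falls into none of the four classes, which is why `classify` can return `None`.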

So, if we now redefine the original three sets to become four (positives, false positives, negatives and false negatives) as P = {1,2,…,p}, FP = {1,2,…,fp}, N = {1,2,…,n} and FN = {1,2,…,fn}, we also need to add their weightings: WP = {wp1,wp2,…,wpp}, WFP = {wfp1,wfp2,…,wfpfp}, WN = {wn1,wn2,…,wnn} and WFN = {wfn1,wfn2,…,wfnfn}. This gives a revised (weighted) formula for accuracy of:
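One natural reading of such a formula, consistent with the sets and weightings just defined, is the sum of the positive weights divided by the sum of all weights across the four classes. A minimal sketch of that reading (the function name and example weights are assumptions of mine, not taken from the text):

```python
def weighted_accuracy(wp, wfp, wn, wfn):
    """One plausible weighted accuracy: the total weight of the positives
    divided by the total weight of all four classes (P, FP, N, FN).

    Each argument is the list of weights for one class, e.g. wp is
    (wp1, wp2, ..., wpp) from the definitions above.
    """
    total = sum(wp) + sum(wfp) + sum(wn) + sum(wfn)
    return sum(wp) / total if total else 0.0

# With equal weights this collapses to the plain ratio p / (p + fp + n + fn):
print(weighted_accuracy([1, 1], [1], [1], []))   # 0.5

# Heavier weights on the misses drag the score down:
print(weighted_accuracy([2, 2], [4], [4], [4]))  # 0.25
```

As the second call shows, weighting only matters when the misses and the hits carry different weights; scale every weight equally and the score is unchanged.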

Feel free to play around with these figures for your favourite sci-fi writer or programme (more examples will probably follow here soon), but it’s possible that, for the particular example given, the weighting doesn’t change anything.