Monthly Archives: January 2013

When dealing with an endless and dynamic flow of space-time data, how does one determine what represents an important change, or even where on the surface of the earth to train one’s attention in the first place? Increasingly, mankind will be relying on analytic sensemaking engines to suggest and direct human attention. Will these engines be right? Or will they constantly misdirect human attention (false positives), leaving the most important discoveries out of sight?

Detecting insight and actionable relevance requires access to a wide observation space; an ability to contextualise the available observation space; and principles by which to assess opportunity and risk, enabling the triage of relevance.

For a moment, imagine looking out your kitchen window only to witness your neighbours in an epic argument. The next day you see the husband at the store purchasing a firearm. Four days later, late at night while trying to fall asleep, you can’t help but notice a somewhat muffled ‘bang’ from outside. The next morning, while pulling out of your driveway on the way to work, you see the neighbour labouring as he drags what looks like a few blankets filled with heavy stuff towards his pickup truck.

Insight adds up
The point is, insight adds up. Take any one or two of these observations independently and there would be very little basis for alarm. However, the combination of these insights would cause any alert human being to at least raise an eyebrow.
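The way such observations add up can be made concrete with a minimal sketch. The events and per-observation suspicion weights below are illustrative assumptions, not a real scoring model: each observation alone stays below an alert threshold, yet the accumulated context crosses it.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    description: str
    suspicion: float  # 0.0 (benign) .. 1.0 (alarming); weights are assumed

def combined_suspicion(observations):
    """Combine independent observations as 1 - product of (1 - s_i):
    each additional piece of context can only raise the overall score."""
    p_benign = 1.0
    for obs in observations:
        p_benign *= 1.0 - obs.suspicion
    return 1.0 - p_benign

# The neighbour vignette, with hypothetical per-observation weights.
events = [
    Observation("epic argument next door", 0.2),
    Observation("firearm purchase at the store", 0.3),
    Observation("muffled 'bang' late at night", 0.3),
    Observation("heavy blankets dragged to a pickup truck", 0.4),
]

ALERT_THRESHOLD = 0.5
# No single observation is alarming on its own...
assert all(e.suspicion < ALERT_THRESHOLD for e in events)
# ...but in combination they cross the threshold.
assert combined_suspicion(events) > ALERT_THRESHOLD
```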

Sounds easy. But this innate capability of human beings to piece together such diverse observations over space and time – incrementally accumulating context – has been difficult to replicate in machines. Just ask any organisation running a risk assessment system whose queues grow faster than its workforce can keep up – a workforce overwhelmed by false positives. Now imagine feeding these processes substantially more data. In fact, the thought of also having to feed the emerging ‘big data’ into these existing processes forces one to stand back for a moment and ask: “How many more false positives can we afford?”

The only way to wrestle big data to the ground is to first place information into context. In the same way puzzle pieces mean more when attached to other puzzle pieces, big data in context makes it possible to lower false positives and false negatives at the same time. No surprise: as more puzzle pieces come together to form the picture, the more precise the understanding of the big picture (risk or opportunity).

The contextualisation of diverse data sources has seen some gains over the last few decades. For example, entity resolution systems allow machines to determine with great certainty that two transactions were carried out by the same person. By contrast, little gain has been made in the area of video and imagery when it comes to classifying an object and determining with certainty that it is the same entity seen in previous observations over the same, secondary, or tertiary data sources.
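In its simplest form, entity resolution compares normalised identifying attributes across records. The records, attributes, and match rule below are hypothetical and grossly simplified; real systems weigh many more features and handle far messier data.

```python
def normalise(record):
    """Reduce identifying attributes to a comparable canonical form."""
    return {
        "name": record["name"].strip().lower(),
        "phone": "".join(ch for ch in record["phone"] if ch.isdigit()),
        "address": record["address"].strip().lower(),
    }

def same_entity(rec_a, rec_b, required_matches=2):
    """Treat two records as the same entity when enough non-empty
    normalised attributes agree (an assumed, illustrative rule)."""
    a, b = normalise(rec_a), normalise(rec_b)
    matches = sum(1 for key in a if a[key] and a[key] == b[key])
    return matches >= required_matches

# Two transactions with superficially different formatting...
tx1 = {"name": "Jane Doe", "phone": "(555) 010-9999", "address": "12 Elm St"}
tx2 = {"name": "jane doe ", "phone": "5550109999", "address": "12 elm st"}
assert same_entity(tx1, tx2)  # ...resolve to the same person
```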

The game changers
Fortunately, big breakthroughs are afoot as space and time move from being a means to correctly place symbols on maps or conduct spatial analysis, to being the magic bits computers will use to contextualise very diverse observations over time. In the story about the neighbour with the argument, gun, bang and dead weight, the space and time of these observations are in fact the “highest order bits,” aiding one’s ability to estimate the big picture.

As more sensors produce more accurate geospatial data about where things are and how they move, the speed and accuracy of context accumulating processes will be a game changer for machine triage and attention directing systems.

Beyond space and time points that demonstrate a point-in-time presence, the motion of entities themselves is telling. Imagine the journey of a cargo container ship. Tick tick tick as it moves along over the surface of the water — following a recurring, predictable route optimised for fuel conservation and time. Then it reaches a port and begins to hang out (hover). Tick tick tick as it is observed to remain in one place. Over a period of time, one discovers that most vessels have a finite number of “hangouts”. In fact, the collection of frequent hangouts strung together can be thought of as a pattern of life or “life arc.”
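Hangout detection of this sort can be sketched as a stay-point scan over time-ordered position ticks. The radius and dwell thresholds below are illustrative assumptions, not values from any real tracking system.

```python
import math

def haversine_km(p, q):
    """Great-circle distance between (lat, lon) pairs, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def find_hangouts(ticks, radius_km=2.0, min_hours=6.0):
    """ticks: time-ordered list of (hour, lat, lon).
    Returns (start_hour, end_hour, anchor) spans in which every tick stays
    within radius_km of the span's first tick for at least min_hours."""
    hangouts, i = [], 0
    while i < len(ticks):
        j = i
        while (j + 1 < len(ticks)
               and haversine_km(ticks[i][1:], ticks[j + 1][1:]) <= radius_km):
            j += 1
        if ticks[j][0] - ticks[i][0] >= min_hours:
            hangouts.append((ticks[i][0], ticks[j][0], ticks[i][1:]))
            i = j + 1
        else:
            i += 1
    return hangouts

# Tick tick tick: three hours under way, then nine hours hovering in port.
ticks = [(0, 1.00, 103.00), (1, 1.00, 103.20), (2, 1.00, 103.40),
         (3, 1.290, 103.850), (6, 1.291, 103.851),
         (9, 1.290, 103.852), (12, 1.291, 103.850)]
spans = find_hangouts(ticks)
assert len(spans) == 1          # one hangout: the port call
assert spans[0][:2] == (3, 12)  # hovering from hour 3 to hour 12
```

The collection of such spans, accumulated per vessel over weeks, is the raw material of a “life arc.”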

Artifacts such as hangouts and life arcs might be useful for projections on maps for human presentation, but data points such as these are pure super-food to context-accumulating, sense-making systems.

Let’s face it: there are not going to be enough humans to ask every smart question every day. And while this is true today, tomorrow, thanks to the big data phenomenon, it will become orders of magnitude more difficult to make sense of all this data. A new paradigm is needed.

The future: The data must find the data and the relevance must find you
How will the data find the data? For starters, diverse observations must be co-located into a shared space. Then one must integrate such diverse observations as they happen, fast enough to do something about them while they are still happening. In both cases, more diverse data, co-located and placed in context (organised fundamentally in terms of space and time), will deliver unprecedented advances in understanding – whether that involves detecting actionable relevance or enabling materially better storytelling.
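One way to picture “the data finding the data” is a store that buckets every observation, whatever its source, by a coarse space-time key, so that a new observation immediately surfaces prior co-located observations. The sources, cell size, and time window below are arbitrary illustrative choices.

```python
from collections import defaultdict

def spacetime_key(lat, lon, hour, cell_deg=0.1, window_hours=6):
    """A coarse grid cell plus time window: the shared frame of reference."""
    return (round(lat / cell_deg), round(lon / cell_deg), hour // window_hours)

class ContextStore:
    def __init__(self):
        self.cells = defaultdict(list)

    def observe(self, source, lat, lon, hour, payload):
        """Insert an observation; return prior co-located observations from
        *other* sources — the data finding the data, as it happens."""
        key = spacetime_key(lat, lon, hour)
        neighbours = [o for o in self.cells[key] if o[0] != source]
        self.cells[key].append((source, payload))
        return neighbours

store = ContextStore()
# Two observations from different (hypothetical) sources, nearby in
# space and time, land in the same cell and find each other.
store.observe("ais", 1.29, 103.82, 10, "vessel hovering off port")
hits = store.observe("customs", 1.31, 103.84, 11, "flagged manifest")
assert hits == [("ais", "vessel hovering off port")]
# An observation far away finds nothing.
assert store.observe("customs", 40.71, -74.00, 11, "routine entry") == []
```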

Analytic exploitation of space-time features will usher in advances in high-quality prediction systems. This happens when diverse data converges in ways only possible with space and time alignment. What follows is better context, better understanding, and superior sensemaking, which in turn enables better business and mission outcomes.