Is data the future? IBM thinks so, but it all depends on how that data is identified, analysed and used

Posted on 9th December 2015 by YmeriHart

IBM has recently announced that it has agreed to buy weather.com for a reported $2bn.

Leaving aside all the puns about ‘cloud-based’ systems, the question is, as Fortune magazine asks: “Why would Big Blue [as IBM is known], purveyor of mainframe computers and business software, acquire a company that brought us Hurricane Sandy coverage?”

Its answer: “One word: Data.” As the writer explains: “IBM is hell-bent on becoming a data-based company. It has been betting that its Watson data crunching service … will eventually offset the declines in its traditional software business.”

IBM believes companies will want to spend money on more accurate weather forecasting: whether it's supermarkets deciding to increase their orders for salads and barbecue food for the weekend, airlines seeking to avoid turbulence, or longer-term aims such as helping insurance companies assess risk.

Of course, a multitude of weather forecasts and climate data is already only a Google search away, so IBM’s Watson artificial intelligence programme will have to produce the exact data which each customer values enough to pay money for.

And this is the essence of the issue. Gathering data has never been easier. What matters is finding the relevant, high-quality data and then using the right tools to produce coherent and concise analysis.

The same challenges are evident in the logistics business. Supply chain visibility is seen as the Holy Grail – we are told that all we need is more visibility to monitor and record every movement of every product and all our problems will be solved.

Real-time visibility is one aspect, and the advantages are obvious: being able to check PoDs (proofs of delivery) instantly, or to identify delays so that intervention can be made and/or customers can be warned.

But these highly sophisticated systems also generate huge amounts of data that is stored and can be accessed later. The questions arise: is it the 'right' data, and what is done with it? Is there so much data that, in the end, both shipper and transport provider are overwhelmed and simply ignore it once a bottleneck or two have been identified and solved?

An analysis by Softship Data Processing found that 140,000 pieces of information are generated when one container ship (admittedly one of the large 15,000 TEU vessels) visits one port. From quoting and booking, to Customs and security data, to tracking the movement of the container from the port gate to loading onto the ship, up to 25 different parties were involved in generating and collecting data.

If that is the amount of information for just one ship in one port, imagine the total data that could potentially be generated for each global shipment – from sourcing and procurement to final delivery and invoicing.

Decades ago we talked about 'information overload' – and that was before the rise of the internet and email. Like IBM, we all understand the potential power of data, but we have to make sure that volume does not cloud the real issue: extracting the data that matters and presenting it in a way that ensures businesses can make informed decisions that help them prosper.