Duplicate Data Storage: The Hidden Costs

The promise of unlimited resources is one of the drivers behind increased uptake of cloud storage services, as businesses struggle to budget for exponentially increasing storage demands. But it’s also an easy out for businesses unwilling or unable to grapple with the thorny issue of duplicate data.

The problems caused by duplicate data are further exacerbated by the relatively low cost of storage: it is far easier to keep adding new capacity, on and off site, than it is to de-duplicate the data being stored.

But this creates a false economy – simply having a copy of every item somewhere on the network actually creates more problems than it solves. Take the issue of employee productivity, for instance.

According to the EMC-sponsored whitepaper The Expanding Digital Universe, employees spend 9.6 hours every week searching for the data they need to do their jobs. EMC estimates that this wasted time costs a 1,000-seat organisation $5.3 million annually in lost productivity.

Some of this wasted time is down to the use of decentralised, unstructured data, and software platforms that do not properly communicate with each other. But in environments where employees have the ability to create and maintain multiple data silos at individual, business unit and corporate levels, the potential for uncontrolled data duplication is huge.

And it is searching through these multiple duplicates to find the ‘correct’ version that kills so much employee productivity: more than a full working day every week is spent tracking down files and data.

Unlimited storage enables yet more duplication, so the problem simply grows – as do the associated financial and productivity losses.

CDS will be publishing a whitepaper looking at the ongoing challenges of data duplication in the New Year. In the meantime, please get in touch with the CDS team to discuss how we can help you better manage your storage and duplicate data.