Storage Efficiency versus Data Reduction
Inside System Storage -- by Tony Pearson
2010-08-20
<p>
Wrapping up my week's theme of storage optimization, I thought I would help clarify the confusion between data reduction and storage efficiency. I have seen many articles and blog posts that either use these two terms interchangeably, as if they were synonyms for each other, or as if one is merely a subset of the other.
</p>
<dl>
<dt><b>Data Reduction is LOSSY</b></dt>
<dd><p>
By &quot;lossy&quot;, I mean that reducing data is an irreversible process. Details are lost, but insight is gained. In the paper [<a href="http://www.iasri.res.in/ebook/EB_SMAR/e-book_pdf%20files/Manual%20II/9-data_reduction.pdf">&quot;Data Reduction Techniques&quot;</a>], Ranjana Agarwal defines this simply:</p>
<blockquote>
&quot;Data reduction techniques are applied where the goal is to aggregate or amalgamate the information contained in large data sets into manageable (smaller) information nuggets.&quot;
</blockquote>
<p>
Data reduction has been around since the 18th century.
</p>
<table><tbody><tr><td>
<a href="http://www.flickr.com/photos/26449036@N06/4910722187/" title="iw_histogram by az990tony, on Flickr"><img alt="iw_histogram" height="240" src="http://farm5.static.flickr.com/4119/4910722187_7273f471f1_m.jpg" width="206" /></a>
</td>
<td>
<p>Take for example this histogram from [<a href="http://searchsoftwarequality.techtarget.com/sDefinition/0,,sid92_gci330729,00.html">SearchSoftwareQuality.com</a>]. We have taken ninety individual student scores and reduced them to just five numbers: the counts in each range. This makes the distribution easier to comprehend and to compare with other distributions.</p>
<p>
The process is lossy. I cannot determine or re-create an individual student's score from these five histogram values.
</p>
</td></tr></tbody></table>
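The histogram reduction above is easy to sketch in Python. The scores and bin ranges below are hypothetical, chosen only to mirror the example:

```python
# Reduce ninety individual student scores to five bin counts (hypothetical data).
import random

random.seed(42)
scores = [random.randint(40, 100) for _ in range(90)]  # 90 student scores

# Five score ranges covering 40-100; each score falls in exactly one bin.
bins = [(40, 52), (52, 64), (64, 76), (76, 88), (88, 101)]
counts = [sum(lo <= s < hi for s in scores) for lo, hi in bins]

print(counts)  # five numbers now summarize ninety scores
# The reduction is lossy: no individual score can be rebuilt from the counts.
```

Ninety values become five, and the assertion that the counts sum to ninety is the only fact about the original data that survives.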
<table><tbody><tr><td>
<p>This next example, compliments of [<a href="http://en.wikipedia.org/wiki/File:Linear_regression.png">Michael Hardy</a>], represents another form of data reduction known as [<a href="http://en.wikipedia.org/wiki/Regression_analysis">&quot;linear regression analysis&quot;</a>]. The idea is to take a large set of data points between two variables, x along the horizontal axis and y along the vertical, and find the line that best fits them. The data is thus reduced from many points to just two numbers, slope (a) and intercept (b), resulting in the equation y=ax+b.</p>
<p>
The process is lossy. I cannot determine or re-create any original data point from this slope and intercept equation.
</p>
</td><td>
<a href="http://www.flickr.com/photos/26449036@N06/4910744711/" title="Linear_regression by az990tony, on Flickr"><img alt="Linear_regression" height="166" src="http://farm5.static.flickr.com/4138/4910744711_c668a93172_m.jpg" width="240" /></a>
</td></tr></tbody></table>
<table><tbody><tr><td>
<a href="http://www.flickr.com/photos/26449036@N06/4911390646/" title="ibm-stock-2010 by az990tony, on Flickr"><img alt="ibm-stock-2010" height="205" src="http://farm5.static.flickr.com/4141/4911390646_1163f7ea90_m.jpg" width="240" /></a>
</td>
<td>
<p>This last example, from [<a href="http://finance.yahoo.com/">Yahoo Finance</a>], reduces millions of stock trades to a single point per day, typically the closing price, to show the overall growth trend over the past year.</p>
<p>
The process is lossy. Even if I knew the low, high and closing price of a particular stock on a particular day, I would not be able to determine or re-create the actual price paid for individual trades that occurred.
</p>
</td></tr></tbody></table>
</dd>
<dt><b>Storage Efficiency is LOSSLESS</b></dt>
<dd><p>
By contrast, there are many IT methods that can be used to store data in ways that are more efficient, without losing any of the fine detail. Here are some examples:
</p><ul>
<li><b>Thin Provisioning: </b>Instead of storing 30GB of data on 100GB of disk capacity, you store it on 30GB of capacity. All of the data is still there, just none of the wasteful empty space.
</li><li><b>Space-efficient Copy:</b> Instead of copying every block of data from source to destination, you copy over only those blocks that have changed since the copy began. The blocks not copied are still available on the source volume, so there is no need to duplicate this data.
</li><li><b>Archiving and Space Management: </b>Data can be moved out of production databases and stored elsewhere on disk or tape. Enough XML metadata is carried along so that there is no loss in the fine detail of what each row and column represent.
</li><li><b>Data Deduplication: </b>The idea is simple. Find large chunks of data that contain the same exact information as an existing chunk already stored, and merely set a pointer to avoid storing the duplicate copy. This can be done in-line as data is written, or as a post-process task when things are otherwise slow and idle.<p />
<p>
When data deduplication first came out, some lawyers were concerned that this was a &quot;lossy&quot; approach, that somehow documents were coming back without some of their original contents. How else can you explain storing 25PB of data on only 1PB of disk?
</p>
<blockquote>
(In some countries, companies must retain data in their original file formats, as there is concern that converting business documents to PDF or HTML would lose some critical &quot;metadata&quot; information such as modification dates, authorship information, underlying formulae, and so on.)
</blockquote>
<p>
Well, the concern applies only to those data deduplication methods that rely solely on a calculated hash code or fingerprint, such as EMC Centera or EMC Data Domain. If the hash code of new incoming data matches the hash code of existing data, the new data is assumed to be identical and discarded. Such hash collisions are rare, and I have read of only a few occurrences of unique data being discarded in the past five years. To ensure full integrity, the IBM ProtecTIER data deduplication solution and IBM N series disk systems chose instead to do full byte-for-byte comparisons.
</p>
</li><li><b>Compression: </b>There are both lossy and lossless compression techniques. The lossless Lempel-Ziv algorithm is the basis for the LTO-DC algorithm used in IBM's Linear Tape Open [<a href="http://en.wikipedia.org/wiki/Linear_Tape-Open">LTO</a>] tape drives, the Streaming Lossless Data Compression (SLDC) algorithm used in IBM's [<a href="http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&amp;infotype=an&amp;appname=iSource&amp;supplier=897&amp;letternum=ENUS108-493">Enterprise-class TS1130</a>] tape drives, and the Adaptive Lossless Data Compression (ALDC) algorithm used by the IBM Information Archive for its disk pool collections.<p />
<p>
Last month, IBM announced that it was [<a href="http://www-03.ibm.com/press/us/en/pressrelease/32219.wss">acquiring Storwize</a>]. Its Random Access Compression Engine (RACE) is also a lossless compression algorithm based on Lempel-Ziv. As servers write files, Storwize compresses them and passes them on to the destination NAS device. When files are read back, Storwize retrieves and decompresses the data to its original form.
</p>
<p>
To read independent views on IBM's acquisition, see Lauren Whitehouse's (ESG) post [<a href="http://www.enterprisestrategygroup.com/2010/07/remove-another-chair-ibm-snatches-storwize/">Remove Another Chair</a>], Chris Mellor's (The Register) article [<a href="http://www.theregister.co.uk/2010/07/29/ibm_buys_storwize/">Storwize Swallowed</a>], or Dave Raffo's (SearchStorage.com) article [<a href="http://searchstorage.techtarget.com/news/article/0,289142,sid5_gci1517552,00.html">IBM buys primary data compression</a>].
</p>
<p>
As with tape, the savings from compression can vary, typically from 20 to 80 percent. In other words, 10TB of primary data could take up from 2TB to 8TB of physical space. To estimate what savings you might achieve for your mix of data types, try out the free [<a href="http://storwize.com/ROI_tool.asp">Storwize Predictive Modeling Tool</a>].</p>
</li></ul></dd></dl>
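The byte-for-byte safeguard described under data deduplication can be sketched as a minimal in-memory model; this is an illustration of the general technique, not any particular product's implementation:

```python
# Hash-based deduplication with byte-for-byte verification: match on
# fingerprint first, then confirm with a full comparison so that a hash
# collision can never cause unique data to be discarded.
import hashlib

store = {}  # fingerprint -> list of stored chunks sharing that fingerprint

def dedup_write(chunk: bytes) -> bytes:
    """Return a stored chunk identical to `chunk`, storing it only if new."""
    fp = hashlib.sha256(chunk).hexdigest()
    for existing in store.setdefault(fp, []):
        if existing == chunk:      # byte-for-byte check, not just the hash
            return existing        # duplicate: reuse the existing copy
    store[fp].append(chunk)        # unique data: store it
    return chunk

first = dedup_write(b"block one")
other = dedup_write(b"block two")
again = dedup_write(b"block one")  # duplicate of the first write
assert first is again              # same stored object, kept only once
assert sum(len(v) for v in store.values()) == 2  # two unique chunks stored
```

A hash-only scheme would stop at the fingerprint match; the extra equality check is what makes the process provably lossless.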
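The lossless guarantee of Lempel-Ziv-family compression is easy to demonstrate with Python's standard zlib module (DEFLATE, an LZ77 derivative in the same broad family as LTO-DC, SLDC and ALDC; the sample data is made up):

```python
# Lossless compression round-trip: every byte of the original is restored.
import zlib

original = b"storage efficiency " * 200   # highly repetitive sample data
compressed = zlib.compress(original, level=9)

savings = 1 - len(compressed) / len(original)
print(f"saved {savings:.0%} of {len(original)} bytes")

# Unlike the lossy data-reduction examples earlier, decompression
# recovers the input exactly.
assert zlib.decompress(compressed) == original
```

Repetitive data compresses dramatically, random data barely at all, which is why real-world savings vary so widely by data type.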
<p>
So why am I making a distinction on terminology here? <br /></p><p>Data reduction is already a well-known concept in specific industries, like High-Performance Computing (HPC) and Business Analytics. IBM has the largest market share in supercomputers that perform data reduction for all kinds of use cases: scientific research, weather prediction, financial projections, and decision support systems. IBM has also recently acquired a number of companies related to Business Analytics, such as Cognos, SPSS, Coremetrics and Unica Corp. These use data reduction on large amounts of business and marketing data to help drive new sources of revenue, provide insight for new products and services, create more focused advertising campaigns, and better understand the marketplace.</p>
<p>
There are certainly enough methods of reducing the quantity of storage capacity consumed, like thin provisioning, data deduplication and compression, to warrant an &quot;umbrella term&quot; that refers to all of them generically. I would prefer we do not &quot;overload&quot; the existing phrase &quot;data reduction&quot; but rather come up with a new phrase, such as &quot;storage efficiency&quot; or &quot;capacity optimization&quot; to refer to this category of features. <br /></p><p>IBM is certainly quite involved in both data reduction as well as storage efficiency. If any of my readers can suggest a better phrase, please comment below.
</p>
<p><img src="http://www.ibm.com/developerworks/blogs/resources/InsideSystemStorage/technorati.gif" /><b>technorati tags:</b> <a href="http://www.technorati.com/tags/IBM" rel="tag">IBM</a>, <a href="http://www.technorati.com/tags/data+reduction" rel="tag">data reduction</a>, <a href="http://www.technorati.com/tags/storage+efficiency" rel="tag">storage efficiency</a>, <a href="http://www.technorati.com/tags/histogram" rel="tag">histogram</a>, <a href="http://www.technorati.com/tags/linear+regression" rel="tag">linear regression</a>, <a href="http://www.technorati.com/tags/thin+provisioning" rel="tag">thin provisioning</a>, <a href="http://www.technorati.com/tags/data+deduplication" rel="tag">data deduplication</a>, <a href="http://www.technorati.com/tags/lossy" rel="tag">lossy</a>, <a href="http://www.technorati.com/tags/lossless" rel="tag">lossless</a>, <a href="http://www.technorati.com/tags/EMC" rel="tag">EMC</a>, <a href="http://www.technorati.com/tags/Centera" rel="tag">Centera</a>, <a href="http://www.technorati.com/tags/hash+collisions" rel="tag">hash collisions</a>, <a href="http://www.technorati.com/tags/Information+Archive" rel="tag">Information Archive</a>, <a href="http://www.technorati.com/tags/LTO" rel="tag">LTO</a>, <a href="http://www.technorati.com/tags/LTO-DC" rel="tag">LTO-DC</a>, <a href="http://www.technorati.com/tags/SLDC" rel="tag">SLDC</a>, <a href="http://www.technorati.com/tags/ALDC" rel="tag">ALDC</a>, <a href="http://www.technorati.com/tags/compression" rel="tag">compression</a>, <a href="http://www.technorati.com/tags/deduplication" rel="tag">deduplication</a>, <a href="http://www.technorati.com/tags/Storwize" rel="tag">Storwize</a>, <a href="http://www.technorati.com/tags/supercomputers" rel="tag">supercomputers</a>, <a href="http://www.technorati.com/tags/HPC" rel="tag">HPC</a>, <a href="http://www.technorati.com/tags/analytics" rel="tag">analytics</a></p>