How Is Big Data Measured?

May 20, 2012 5:20 pm

We have been hearing the term ‘Big Data’ quite frequently these days – but what scale are we really talking about? We have written before about how big cloud computing really is, and recently we spotted some interesting stats on how big big data is – not in terms of specific figures, but in terms of sheer scale.

“Metric prefixes rule the day when it comes to defining Big Data volume. In order of ascending magnitude: kilobyte, megabyte, gigabyte, terabyte, petabyte, exabyte, zettabyte, and yottabyte. A yottabyte is 1,000,000,000,000,000,000,000,000 bytes = 10 to the 24th power bytes.

Big data can come fast. Imagine dealing with 5TB per second, as Akamai does on its content delivery and acceleration network. Or consider algorithmic trading, where complex event processing platforms such as Progress Apama have just 100 microseconds to detect buy/sell patterns in trades arriving at 5,000 orders per second.

Flavors of data can be just as shocking, because combinations of relational data and unstructured data such as text, images, video, and every other variation add complexity to storing, processing, and querying that data.”
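To make the prefix ladder in the quote concrete, here is a minimal Python sketch that walks the same list of decimal (SI) prefixes and prints the size of each unit; the prefix names and the 10-to-the-24th figure are taken straight from the quote above:

```python
# Decimal (SI) byte-unit prefixes, in the ascending order quoted above.
# Each step up the ladder multiplies the size by 1,000.
prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]

for power, name in enumerate(prefixes, start=1):
    size = 10 ** (3 * power)
    print(f"1 {name}byte = 10^{3 * power} bytes = {size:,} bytes")
```

The last line printed is the yottabyte: 10 to the 24th power bytes, a 1 followed by 24 zeros, just as the quote says.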
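The trading figures quoted above imply a remarkably tight budget, which a bit of back-of-the-envelope arithmetic makes clear (the only inputs here are the two numbers from the quote):

```python
orders_per_second = 5_000      # order rate from the quote
detection_budget_us = 100      # detection window from the quote, in microseconds

# Average gap between consecutive orders, in microseconds.
inter_arrival_us = 1_000_000 / orders_per_second
print(inter_arrival_us)        # 200.0 microseconds between orders, on average

# The 100-microsecond budget is only half the average inter-arrival gap,
# so each detection must finish well before the next order lands.
print(detection_budget_us / inter_arrival_us)   # 0.5
```

In other words, at 5,000 orders per second a new order arrives every 200 microseconds on average, and the engine gets only half of that gap to do its pattern detection.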