What Is Big Data?

Do you think of mainframes? Data warehouses? Do you think of Oracle Grids, Exadata, or Teradata clusters?

Perhaps you think of Hadoop, MongoDB, Cassandra, or CouchDB? Or maybe it's any NoSQL database?

Or perhaps you think it's just a giant mass of data in one place?

If you read press articles on big data, then it's all of these things. It's my belief that no good definition of big data exists today. In fact, the term is so overused, and I think intentionally so, that it's almost meaningless. I want to address that problem here.

And I'll state up front that the big data phenomenon is not happening because people are buying more big iron.

During the past year, I've spent an inordinate amount of time researching security in and around big data clusters. It has been a challenge; each time I think I have a handle on one aspect of what constitutes big data, I find an exception that breaks the conceptual model I've created. Every time I think I've quantified a specific attribute or feature, I find another variation of NoSQL that's an exception to the rule. It was a struggle even to define what big data actually is; the definitions from Wikipedia and other sources miss several essential ingredients. In fact, the definition section of the Wikipedia entry on big data doesn't really offer a definition at all. All in all, this is one of the most difficult, and most interesting, research projects I've been involved with.

I want to share some of the results of that research here because I think it will help explain why securing big data is difficult, and how the challenge differs from that of the relational platforms many of you are familiar with. In a future post, I'll discuss some of the fundamental differences in how big data systems are deployed and managed from a security perspective, but before I can talk about how to secure "it," I need to define what "it" is.

Yes, big data is about lots of data, of differing types, coming in at velocities that cripple most traditional database systems. But there are other essential characteristics besides size and the need for fast insertion, such as the ability to elastically scale as the data set grows. It's about distributed, parallel processing to tackle massive analysis tasks. It's about data redundancy to provide failure resistant operation, which is critical when computing environments span so many systems that hardware failures are to be expected during the course of operation.

And just as important, these systems are hardware-agnostic, accessible from a complexity standpoint, extensible, and relatively inexpensive. Together, these characteristics define big data systems.

The poster child for big data is Hadoop, a framework that at its core provides data management and query (map-reduce) services across (potentially) thousands of servers. Everything about big data clusters is designed to address storage and processing of multiple terabytes of data across as many systems as needed, in an elastic, expansive way. In fact, these clusters are so large that the prospect of failure increases to the point where it's probable a node will fail during operation. This elasticity, resiliency, and ability to process requests in more than one location make big data different from the databases that came before it.
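To make the map-reduce idea concrete, here is a minimal, single-process sketch of the pattern Hadoop distributes across thousands of nodes. The word-count task and all function names here are illustrative assumptions, not Hadoop APIs; a real cluster runs the map, shuffle, and reduce phases on different machines and re-executes work lost to node failures.

```python
from collections import defaultdict

def map_phase(record):
    # Mapper: emit an intermediate (key, value) pair for each word.
    for word in record.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group intermediate values by key. In a real cluster,
    # this is the step that moves data between nodes.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reducer: aggregate all values observed for a key.
    return key, sum(values)

def map_reduce(records):
    intermediate = (pair for rec in records for pair in map_phase(rec))
    return dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())

counts = map_reduce(["big data is big", "data at scale"])
# counts == {'big': 2, 'data': 2, 'is': 1, 'at': 1, 'scale': 1}
```

The appeal of the model is that the mapper and reducer are the only pieces an analyst writes; the framework handles distribution, grouping, and fault tolerance.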

But the reason big data is a major trend is the convergence of three things: huge amounts of data, cheap computing resources, and free (or nearly free) analytic tools. Enterprises and midmarket firms are embracing big data not because they can suddenly afford to invest millions of dollars in data warehouse systems, MPPs, mainframes, or giant systems-in-a-box. It's because they can now afford data analysis on massive data sets without spending much money up front. Cheap commodity or cloud computing resources, combined with free and easy data management systems like Hadoop, make it possible.

If you need to understand what big data is, then consider the characteristics outlined above. They should help you differentiate traditional systems from big data.

Adrian Lane is an analyst/CTO with Securosis LLC, an independent security consulting practice. Special to Dark Reading.

Adrian from Securosis did a fantastic piece of research on securing Big Data that provides a nice summary of the topic. The paper is available for download at http://www.vormetric.com/resou... Enjoy, TT
