Why Hadoop? (Part 1)

If you have been a regular reader of just about any technology blog or publication over the last year, you'd be hard-pressed not to have heard about big data and especially the excitement (some might argue hype) surrounding Hadoop. Big data is becoming big business, and the buzz around it is building commensurately. What began as a specialized solution to a unique problem faced by the largest Web 2.0 search engines and social media outlets – namely, the need to ingest, store, and analyze vast amounts of semi-structured or unstructured data in a fast, efficient, cost-effective, and reliable manner that challenges traditional relational database management and storage approaches – has expanded in scope across nearly every industry vertical and trickled out into a wide variety of IT shops, from small technology startups to large enterprises. Big business has taken note, and major industry players such as IBM, Oracle, EMC, and Cisco have all begun investing directly in this space. But why has Hadoop itself proved so popular, and how has it solved some of the limitations of traditional structured relational database management systems (RDBMSs) and associated SAN/NAS storage designs?

In Part 1 of this post I'll start by taking a closer look at some of those problems, and tomorrow in Part 2 I'll show how Hadoop addresses them.

Businesses of all shapes and sizes are asking complex questions of their data to gain a competitive advantage: retail companies want to be able to track changes in brand sentiment from online sources like Facebook and Twitter and react to them rapidly; financial services firms want to scour large swaths of transaction data to detect fraud patterns; power companies ingest terabytes of data from millions of smart meters generating data every hour in hopes of uncovering new efficiencies in billing and delivery. As a result, developers and data analysts are demanding fast access to as large and “pure” a data set as possible, taxing the limits of traditional software and infrastructure and exposing the following technology challenges:

CPU horsepower/density continues to outpace spinning disk performance. As compute power tracks Moore's Law and spinning disk capacities continue to advance at a rapid pace, we're still stuck with relatively low bandwidth from a given spindle – maybe 150 MB/s sequential read throughput from an enterprise-class 15k RPM SAS drive, or 80 MB/s from a slower/cheaper/denser 7.2k RPM SATA disk. A common solution to the bandwidth limit of a single hard disk spindle is to add more spindles and parallelize the read and write operations. SAN and NAS systems became hugely popular by providing far more spindles than a single server could hold internally, with relatively fast access to them over a Fibre Channel or Ethernet fabric. This separation of compute from storage works well as long as your application doesn't need to read or write very large amounts of data very quickly; beyond that point, bottlenecks in the fabric, server, or storage array can arise. With big data-scale applications addressing terabyte- and even petabyte-sized working sets, this compute/storage performance imbalance often leaves the compute starved of data in traditional SAN/NAS-based RDBMS architectures.
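A bit of back-of-envelope arithmetic makes the spindle-bandwidth bottleneck concrete. The sketch below uses the throughput figures from the paragraph above (150 MB/s SAS, 80 MB/s SATA); the working-set size and spindle counts are hypothetical examples, and it deliberately assumes perfect parallelism with no fabric or controller bottlenecks – the best case.

```python
# Idealized sequential-scan times for a 10 TB working set. Throughput
# figures (150 MB/s SAS, 80 MB/s SATA) are from the post; the dataset
# size and spindle counts are hypothetical illustrations.

DATASET_TB = 10
DATASET_MB = DATASET_TB * 1024 * 1024  # 10 TB expressed in MB

def scan_hours(spindles, mb_per_sec_per_spindle):
    """Hours to read the whole set once, assuming perfect parallelism
    and no fabric/controller bottlenecks (the best case)."""
    total_mb_per_sec = spindles * mb_per_sec_per_spindle
    return DATASET_MB / total_mb_per_sec / 3600

# One 15k RPM SAS drive: roughly 19 hours just to read the data once.
print(f"1 SAS spindle:     {scan_hours(1, 150):6.1f} h")
# 24 striped SATA disks: ~1.5 h, but the fabric must now sustain ~1.9 GB/s.
print(f"24 SATA spindles:  {scan_hours(24, 80):6.1f} h")
# 500 spindles spread across a cluster: minutes instead of hours.
print(f"500 SATA spindles: {scan_hours(500, 80):6.2f} h")
```

The numbers show why simply adding spindles behind a shared fabric only helps up to the point where the fabric itself saturates, which is the imbalance described above.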

The explosion of data, especially the unstructured or semi-structured variety, taxes traditional systems' scalability. Customers are struggling to manage and analyze the massive influx of data from a variety of sources: system and network event logs, application clickstream data, sensor data from robots on the manufacturing floor, and other human-generated and especially machine-generated data sources. In many cases this data is simply thrown away for lack of an effective means of capturing, storing, and analyzing it. RDBMSs, with their comparatively rigid data models and transactional focus, can be cumbersome for developers to adapt to the varying data types and flexible analysis models these data sets require.
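To illustrate the schema-flexibility problem with a hypothetical example: event records from different sources rarely share a fixed set of columns, which is awkward to force into one rigid relational table but straightforward to handle with a schema-on-read approach of the kind Hadoop encourages. The record fields below are invented for illustration.

```python
import json

# Hypothetical event records from three different sources. The fields
# vary per record -- a poor fit for one master relational schema.
raw_events = [
    '{"src": "web", "ts": 1325376000, "url": "/cart", "user": "alice"}',
    '{"src": "meter", "ts": 1325376010, "kwh": 1.3, "meter_id": 7741}',
    '{"src": "syslog", "ts": 1325376020, "sev": "warn", "msg": "link flap"}',
]

events = [json.loads(line) for line in raw_events]

# Schema-on-read: each analysis picks out only the fields it needs,
# tolerating records that lack them instead of forcing one schema up front.
total_kwh = sum(e.get("kwh", 0.0) for e in events)
warn_msgs = [e["msg"] for e in events if e.get("sev") == "warn"]
print(total_kwh)   # 1.3
print(warn_msgs)   # ['link flap']
```

The point is not that an RDBMS cannot store such data, but that interpreting structure at read time keeps ingestion cheap while the data model is still evolving.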

The need to scale out horizontally to overcome the processing and availability limitations of one or a small number of monolithic servers presents complex distributed computing challenges. A given database node can only be so big before it can't handle the compute needs of the applications trying to access its data, and before it becomes too big to fail. Splitting the compute demands across multiple servers can address both performance and availability concerns, in much the same way that parallelizing I/O across multiple spindles in a RAID set can aggregate throughput and improve reliability. But all of that distribution and parallelization comes at a cost: writing performant distributed applications is a difficult business, placing its own demands on the network, compute, and storage for concurrency, synchronization, locking, and especially failure and recovery. Historically it's been painful for developers to reinvent the distributed application wheel with each new app.

Acknowledging these difficulties with the traditional RDBMS+SAN/NAS model, a new breed of applications and underlying data management frameworks has emerged over the last decade to handle big data sets in a cost-effective and timely manner. Hadoop has become one of the most popular choices for big data problems, as it was purpose-built to address these shortcomings. In Part 2 of this post, I'll take a closer look at how Hadoop works in this context.


At what limit does an RDBMS stop being efficient? And does Hadoop have applications in the telecom field, especially 3G and 4G, from the operator's point of view rather than the end user's?
Best regards,
Jihad Daouk

Hello Jihad-
There's no specific limit at which an RDBMS becomes inefficient; as with most technology decisions, the answer is "it depends". Hadoop and MapReduce are another tool in the data toolkit alongside the venerable RDBMS, each with its own strengths. Companies should be aware of the new options Hadoop provides and select the right tool for the job. I'm sure there are plenty of telecom use cases for Hadoop, but I'm not tied into that industry well enough to provide specifics.
-Sean

Sean
Excellent post, and I agree on all counts. Do you think traditional RDBMS and SAN/NAS models will eventually integrate big data systems by default, so that we'll see converged data set systems?
Thanks

Thanks Kapil. There are already examples of vendors providing MapReduce-like capabilities on RDBMSs - I remember seeing at least one or two sessions at Oracle OpenWorld on this subject (MapReduce in PL/SQL, etc.). Whether that makes sense depends on the data and the application. MapReduce itself isn't that radically new an idea in computer science, but its application in Hadoop on top of HDFS, with distributed computing designed in from the ground up, is innovative. So yes, I expect we'll see some level of convergence, and even some converged systems and appliances trying to provide the best of both worlds. They'll probably make sense for some customers, in the same way it may make sense to run Hadoop virtualized or even with non-local storage for a given deployment, even if that's not the optimal design from a performance standpoint on paper.

Given the emergence of distributed storage technologies such as distributed databases and filesystems, and distributed computing technologies such as Hadoop implementations: do you think Cisco data center and storage products need to support them at the infrastructure (NX-OS/IOS) level, or is that support already there?

Hi Sameer-
Great question. I think there's a lot already there - in fact some of the largest Hadoop clusters in the world run on Cisco networks, and I think we have a strong suite of products for building a complete Hadoop infrastructure stack. By the same token I think there's a lot of opportunity out there in what is still a very young market, and I'm really excited to be in a position to help drive Cisco toward new innovations and solutions (whether in silicon or software or both) that can bring additional value to customers in this space.
-Sean

I am not sure Hadoop in its current form will play a big role in big data analytics. Signs of that are in the way it is evolving to support NAS/SAN, move out of the JVM (i.e., no code mobility or scale-out), and support structured data.
What is really catching on in industry, IMHO, is the value of analytics at all levels: invocation (API, messaging), datastore (structured and unstructured), and execution (mostly moving to VMs).

Thanks Vikas, it's definitely a fluid space. Though I'm not quite ready to agree that Hadoop in its current DAS-oriented form won't be playing a big role in big data analytics - I think it already is, and will continue to do so. Although there are efforts to optimize it for more classic SAN/NAS environments, and possibly even to virtualize it, at some point the value proposition of Hadoop starts to get lost by shoehorning it into previous-generation IT models. That's not to say there's no value in these efforts - many customers will want the power of MapReduce running in an operational model they're already comfortable with - but all else equal, they'll probably end up paying more at scale for an equivalent amount of analytic horsepower.
Regards,
Sean
