
Saturday, March 24, 2018

They're sometimes viewed as competitors in the big-data space, but the growing consensus is that they're better together

Listen in on any conversation about big data, and you'll probably hear mention of Hadoop or Apache Spark. Here's a brief look at what they do and how they compare.

1: They do different things. Hadoop and Apache Spark are both big-data frameworks, but they don't really serve the same purposes. Hadoop is essentially a distributed data infrastructure: It distributes massive data collections across multiple nodes within a cluster of commodity servers, which means you don't need to buy and maintain expensive custom hardware. It also indexes and keeps track of that data, enabling big-data processing and analytics far more effectively than was possible previously. Spark, on the other hand, is a data-processing tool that operates on those distributed data collections; it doesn't do distributed storage.

2: You can use one without the other. Hadoop includes not just a storage component, known as the Hadoop Distributed File System, but also a processing component called MapReduce, so you don't need Spark to get your processing done. Conversely, you can also use Spark without Hadoop. Spark does not come with its own file management system, though, so it needs to be integrated with one -- if not HDFS, then another cloud-based data platform. Spark was designed for Hadoop, however, so many agree they're better together.

3: Spark is speedier. Spark is generally a lot faster than MapReduce because of the way it processes data. While MapReduce operates in steps, Spark operates on the whole data set in one fell swoop. "The MapReduce workflow looks like this: read data from the cluster, perform an operation, write results to the cluster, read updated data from the cluster, perform next operation, write next results to the cluster, etc.," explained Kirk Borne, principal data scientist at Booz Allen Hamilton. Spark, on the other hand, completes the full data analytics operations in-memory and in near real-time: "Read data from the cluster, perform all of the requisite analytic operations, write results to the cluster, done," Borne said. Spark can be as much as 10 times faster than MapReduce for batch processing and up to 100 times faster for in-memory analytics, he said.

4: You may not need Spark's speed. MapReduce's processing style can be just fine if your data operations and reporting requirements are mostly static and you can wait for batch-mode processing. But if you need to do analytics on streaming data, like from sensors on a factory floor, or have applications that require multiple operations, you probably want to go with Spark. Most machine-learning algorithms, for example, require multiple operations. Common applications for Spark include real-time marketing campaigns, online product recommendations, cybersecurity analytics and machine log monitoring.

5: Failure recovery: different, but still good. Hadoop is naturally resilient to system faults or failures since data are written to disk after every operation, but Spark has similar built-in resiliency by virtue of the fact that its data objects are stored in resilient distributed datasets (RDDs), which are spread across the data cluster. "These data objects can be stored in memory or on disks, and RDD provides full recovery from faults or failures," Borne pointed out.

Build and Install Hadoop 2.x or newer on Windows

1. Introduction

Hadoop version 2.2 onwards includes native support for Windows. The official Apache Hadoop releases do not include Windows binaries (yet, as of January 2014). However building a Windows package from the sources is fairly straightforward.

Hadoop is a complex system with many components. Some familiarity at a high level is helpful before attempting to build or install it for the first time. Familiarity with Java is necessary in case you need to troubleshoot.

2. Building Hadoop Core for Windows

2.1. Choose target OS version

The Hadoop developers have used Windows Server 2008 and Windows Server 2008 R2 during development and testing. Windows Vista and Windows 7 are also likely to work because of the Win32 API similarities with the respective server SKUs. We have not tested on Windows XP or any earlier versions of Windows, and these are not likely to work. Any issues reported on Windows XP or earlier will be closed as Invalid.

Do not attempt to run the installation from within Cygwin. Cygwin is neither required nor supported.

2.2. Choose Java Version and set JAVA_HOME

Oracle JDK versions 1.7 and 1.6 have been tested by the Hadoop developers and are known to work.

Make sure that JAVA_HOME is set in your environment and does not contain any spaces. If your default Java installation directory has spaces then you must use the Windows 8.3 pathname instead, e.g. c:\Progra~1\Java\... instead of c:\Program Files\Java\....
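The no-spaces rule is easy to check mechanically. Here is an illustrative POSIX-shell sketch (the helper name and the example paths are made up for demonstration, not part of any Hadoop tooling):

```shell
# Flag a JAVA_HOME value that contains spaces, since Hadoop's scripts
# mishandle paths such as "C:\Program Files\Java". Spaces mean you should
# switch to the 8.3 short name (c:\Progra~1\...).
check_java_home() {
  case "$1" in
    *" "*) printf '%s\n' "WARNING: path contains spaces; use the 8.3 short name instead" ;;
    *)     printf '%s\n' "OK" ;;
  esac
}

check_java_home 'C:\Program Files\Java\jdk1.7.0'   # triggers the warning
check_java_home 'C:\Progra~1\Java\jdk1.7.0'        # passes
```

On Windows itself you would run the equivalent check in cmd or PowerShell; the point is simply to validate the value before the Hadoop scripts consume it.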

2.3. Getting Hadoop sources

The current stable release as of August 2014 is 2.5. The source distribution can be retrieved from the ASF download server or using subversion or git.

Git repository URL: git://git.apache.org/hadoop-common.git. After downloading the sources via git, switch to the stable 2.5 release using git checkout branch-2.5, or use the appropriate branch name if you are targeting a newer version.

2.4. Installing Dependencies and Setting up Environment for Building

The BUILDING.txt file in the root of the source tree has detailed information on the list of requirements and how to install them. It also includes information on setting up the environment and a few quirks that are specific to Windows. It is strongly recommended that you read and understand it before proceeding.

2.5. A few words on Native IO support

Hadoop on Linux includes optional Native IO support. However, Native IO is mandatory on Windows, and without it you will not be able to get your installation working. You must follow all the instructions from BUILDING.txt to ensure that Native IO support is built correctly.

2.6. Build and Copy the Package files

To build a binary distribution run the following command from the root of the source tree.

mvn package -Pdist,native-win -DskipTests -Dtar

Note that this command must be run from a Windows SDK command prompt as documented in BUILDING.txt. A successful build generates a binary Hadoop .tar.gz package in hadoop-dist\target\.

The Hadoop version is present in the package file name. If you are targeting a different version then the package name will be different.

2.7. Installation

Pick a target directory for installing the package. We use c:\deploy as an example. Extract the tar.gz file (e.g. hadoop-2.5.0.tar.gz) so that its contents land under c:\deploy, yielding the standard Hadoop directory layout (bin, etc, sbin, share, and so on). If installing a multi-node cluster, then repeat this step on every node.
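The original directory listing did not survive formatting; as a rough sketch of the standard Hadoop 2.x binary-package layout (not a verbatim listing from any particular build), the tree looks like this:

```
c:\deploy\
    bin\         command-line tools (hadoop.cmd, hdfs.cmd, ...)
    etc\hadoop\  configuration file templates
    libexec\
    sbin\        scripts to start and stop the daemons
    share\       Hadoop jars and documentation
```

The paths used in the configuration steps below (c:\deploy\etc\hadoop, %HADOOP_PREFIX%\bin) assume this layout.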

3. Starting a Single Node (pseudo-distributed) Cluster

This section describes the absolute minimum configuration required to start a Single Node (pseudo-distributed) cluster and also run an example MapReduce job.

3.1. Example HDFS Configuration

Before you can start the Hadoop daemons you will need to make a few edits to configuration files. The configuration file templates will all be found in c:\deploy\etc\hadoop, assuming your installation directory is c:\deploy.

First edit the file hadoop-env.cmd to add the following lines near the end of the file.

set HADOOP_PREFIX=c:\deploy
set HADOOP_CONF_DIR=%HADOOP_PREFIX%\etc\hadoop
set YARN_CONF_DIR=%HADOOP_CONF_DIR%
set PATH=%PATH%;%HADOOP_PREFIX%\bin

Edit or create the file core-site.xml and make sure it contains the configuration key for the default file system (fs.defaultFS), pointing at your HDFS NameNode.
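The configuration snippet itself was lost in formatting. A minimal core-site.xml for a pseudo-distributed setup would look like the following; the host and port in the value are placeholders for this sketch, so substitute whatever address your NameNode should listen on:

```xml
<configuration>
  <property>
    <!-- URI of the default file system; fs.default.name is the
         deprecated pre-2.x name for the same key. -->
    <name>fs.defaultFS</name>
    <value>hdfs://0.0.0.0:19000</value>
  </property>
</configuration>
```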

4. Multi-Node cluster

TODO: Document this

5. Conclusion

5.1. Caveats

The following features are yet to be implemented for Windows.

Hadoop Security

Short-circuit reads

5.2. Questions?

If you have any questions you can request help from the Hadoop mailing lists. For help with building Hadoop on Windows, send mail to common-dev@hadoop.apache.org. For all other questions send email to user@hadoop.apache.org. Subscribe/unsubscribe information is included on the mailing lists webpage. Please note that the mailing lists are monitored by volunteers.