Introduction

MapReduce 2, or Next Generation MapReduce, is a long-needed upgrade to the way that scheduling, resource management, and execution occur in Hadoop. At their core, the improvements
separate cluster resource management capabilities from MapReduce-specific logic. They enable Hadoop to share resources dynamically between MapReduce and other parallel processing frameworks, such as
Impala, allow more sensible and finer-grained resource configuration for better cluster utilization, and permit the cluster to scale to accommodate more and larger jobs.

This document provides a guide to both the architectural and user-facing changes, so that both cluster operators and MapReduce programmers can easily make the transition.

Terminology and Architecture

MapReduce 1 (MRv1) has been split into two components. The cluster resource management capabilities have become YARN (Yet Another Resource Negotiator), while the
MapReduce-specific capabilities remain MapReduce. In the MRv1 architecture, the cluster was managed by a service called the JobTracker. TaskTracker services lived on each host and
launched tasks on behalf of jobs. The JobTracker also served information about completed jobs.

In MRv2, the functions of the JobTracker have been split among three services. The ResourceManager is a persistent YARN service that receives and runs applications (a
MapReduce job is an application) on the cluster. It contains the scheduler, which, as before, is pluggable. The MapReduce-specific capabilities of the JobTracker have been moved into the
MapReduce ApplicationMaster, one of which is started to manage each MapReduce job and terminated when the job completes. The JobTracker function of serving information about completed jobs has been
moved to the JobHistory Server. The TaskTracker has been replaced with the NodeManager, a YARN service that manages resources and deployment on a host. It is responsible for launching containers,
each of which can house a map or reduce task.

The new architecture has its advantages. First, by breaking up the JobTracker into a few different services, it avoids many of the scaling issues faced by MapReduce in Hadoop 1. More
importantly, it makes it possible to run frameworks other than MapReduce on a Hadoop cluster. For example, Impala can also run on YARN and share resources with MapReduce.

For MapReduce Programmers: Writing and Running Jobs

Nearly all jobs written for MRv1 can run without any modifications on an MRv2 cluster.

Java API Compatibility

MRv2 supports both the old (mapred) and new (mapreduce) MapReduce APIs used for MRv1, with a few caveats. The difference
between the old and new APIs, which concerns user-facing changes, should not be confused with the difference between MRv1 and MRv2, which concerns changes to the underlying framework. CDH 4 and CDH 5
both support the new and old MapReduce APIs.
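
For instance, the two APIs are distinguished only by their Java packages and entry points; both compile and run against MRv1 and MRv2. A minimal sketch (error handling elided):

    import org.apache.hadoop.mapred.JobConf;   // old ("mapred") API entry point
    import org.apache.hadoop.mapreduce.Job;    // new ("mapreduce") API entry point

    public class ApiFlavors {
      public static void main(String[] args) throws Exception {
        // Old API: jobs are configured and submitted through JobConf/JobClient.
        JobConf oldStyle = new JobConf();
        // New API: jobs are configured and submitted through Job.
        Job newStyle = Job.getInstance();
        System.out.println(oldStyle.getNumReduceTasks() + " / " + newStyle.getNumReduceTasks());
      }
    }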

In general, applications that use @Public/@Stable APIs are binary-compatible from CDH 4, meaning that compiled binaries
should run without modification on the new framework. Source compatibility may be broken for applications that use a few obscure APIs that are technically public, but rarely needed and
primarily exist for internal use. These APIs are detailed below. Source incompatibility means that code changes are required to compile. It is orthogonal to binary compatibility: binaries for an
application that is binary-compatible, but not source-compatible, continue to run fine on the new framework, but code changes are required to regenerate those binaries.

Migration path              Binary Incompatibilities    Source Incompatibilities
CDH 4 MRv1 to CDH 5 MRv1    None                        None
CDH 4 MRv1 to CDH 5 MRv2    None                        Rare
CDH 5 MRv1 to CDH 5 MRv2    None                        Rare

The following are the known source incompatibilities:

KeyValueLineRecordReader#getProgress and LineRecordReader#getProgress now throw IOExceptions in both the old and new APIs.
Their superclass method, RecordReader#getProgress, already did this, but source compatibility will be broken for the rare code that called it without a try/catch block (a minimal fix is sketched after this list).

FileOutputCommitter#abortTask now throws an IOException. Its superclass method always did this, but source compatibility
will be broken for the rare code that used it without a try/catch block. This was fixed in CDH 4.3 MRv1 to be compatible with MRv2.

Job#getDependentJobs, an API marked @Evolving, now returns a List instead of an
ArrayList.
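
For code hit by the first two incompatibilities, the fix is mechanical: wrap the call in a try/catch block, or declare the exception. A minimal sketch against the new (mapreduce) API; the fallback policy here is a hypothetical choice:

    import java.io.IOException;

    import org.apache.hadoop.mapreduce.RecordReader;

    public final class ProgressUtil {
      private ProgressUtil() {}

      // MRv1 code could call getProgress() bare; against MRv2 the call
      // must handle the declared exceptions before it compiles.
      public static float progressOrZero(RecordReader<?, ?> reader) {
        try {
          return reader.getProgress();
        } catch (IOException e) {
          return 0.0f;                        // hypothetical fallback policy
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt(); // preserve the interrupt status
          return 0.0f;
        }
      }
    }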

Compiling Jobs Against MRv2

If you are using Maven, compiling against MRv2 requires including the same artifact, hadoop-client; changing the version to a Hadoop 2 version (for example,
using 2.2.0-cdh5.0.0 instead of 2.0.0-mr1-cdh4.3.0) should be sufficient. If you are not using Maven, compiling against all the Hadoop JARs is recommended. A comprehensive list of Hadoop Maven artifacts
is available at: Using the CDH 5 Maven Repository.

If you want your job to run against both MRv1 and MRv2, compile it against MRv2.

Job Configuration

As in MRv1, job configuration options can be specified on the command line, in Java code, or in mapred-site.xml on the client machine. The vast
majority of job configuration options that were available in MRv1 work in MRv2 as well. For consistency and clarity, many options have been given new names. The older
names are deprecated, but will still work for the time being. The exceptions are mapred.child.ulimit and all options relating to JVM reuse, which are no
longer supported.
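
As a minimal sketch of the "in Java code" route (the option values here are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class SubmitExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // New MRv2 name; the deprecated MRv1 name mapred.reduce.tasks
        // still resolves to the same setting for the time being.
        conf.setInt("mapreduce.job.reduces", 4);

        Job job = Job.getInstance(conf, "migration-example");
        // ... set mapper, reducer, and input/output paths here ...
        System.out.println("reduces = " + job.getNumReduceTasks());
      }
    }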

Submitting and Monitoring Jobs

The MapReduce command-line interface remains entirely compatible. Use of the hadoop command-line tool to run MapReduce-related commands (pipes, job, queue,
classpath, historyserver, distcp, archive) is deprecated, but still works. The mapred command-line tool is preferred for these commands.

Selecting Appropriate JAR files for Your Jobs

The following table shows the names and locations of the JAR files used in MRv1 and the corresponding names and locations in YARN:

Requesting Resources

A MapReduce job submission includes the amount of resources to reserve for each map and reduce task. As in MapReduce 1, the amount of memory requested is controlled by the mapreduce.map.memory.mb and mapreduce.reduce.memory.mb properties.

MapReduce 2 also adds parameters that control how much processing power to reserve for each task. The mapreduce.map.cpu.vcores and
mapreduce.reduce.cpu.vcores properties express how much parallelism a map or reduce task can take advantage of. These should remain at their default value of 1 unless
your code explicitly spawns extra compute-intensive threads.
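
As a hedged sketch, a job whose map tasks each run two busy threads might request (the values are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class ResourceRequestExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("mapreduce.map.memory.mb", 2048);  // 2 GB containers for maps
        conf.setInt("mapreduce.map.cpu.vcores", 2);    // map code runs two busy threads
        conf.setInt("mapreduce.reduce.memory.mb", 1024);
        conf.setInt("mapreduce.reduce.cpu.vcores", 1); // the default; shown for clarity
        Job job = Job.getInstance(conf, "resource-request-example");
        // ... remaining job setup elided ...
      }
    }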

Note:

As of CDH 5.4.0, configuring MapReduce jobs is simpler than before: instead of having to set both the heap size (mapreduce.map.java.opts or mapreduce.reduce.java.opts) and the container size (mapreduce.map.memory.mb or mapreduce.reduce.memory.mb), you can
now choose to set only one of them; the other is inferred from mapreduce.job.heap.memory-mb.ratio. If you specify neither, the container size defaults to 1
GiB and the heap size is inferred from the ratio.

The impact on user jobs is as follows: for jobs that do not set a heap size, this increases the JVM heap from 200 MB to a default of roughly 820 MB (the default ratio of 0.8 applied to the
1 GiB container). This should be fine for most jobs, but streaming tasks might need more memory because the larger Java process pushes their total usage over the container size. Even then,
this would likely happen only for tasks that relied on aggressive garbage collection to keep the heap under 200 MB.
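
A minimal sketch of that inference; the helper is hypothetical (not a Hadoop API) and assumes the 0.8 default ratio mentioned above:

    public class HeapInference {
      // Mirrors the inference described above:
      // heap = container size * mapreduce.job.heap.memory-mb.ratio
      static int inferHeapMb(int containerMb, double ratio) {
        return (int) (containerMb * ratio);
      }

      public static void main(String[] args) {
        System.out.println(inferHeapMb(1024, 0.8)); // ~819 MB: the "820 MB" default
        System.out.println(inferHeapMb(2048, 0.8)); // a 2 GiB container gets ~1638 MB
      }
    }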

For Administrators: Configuring and Running MRv2 Clusters

Configuration Migration

Since MapReduce 1 functionality has been split into two components, MapReduce cluster configuration options have been split into YARN configuration options, which go in yarn-site.xml, and MapReduce configuration options, which go in mapred-site.xml. Many have been given new names to reflect the shift. Because JobTrackers
and TaskTrackers no longer exist in MRv2, the configuration options pertaining to them no longer exist either, although many have corresponding options for the ResourceManager, NodeManager, and
JobHistory Server.

Resource Configuration

One of the larger changes in MRv2 is the way that resources are managed. In MRv1, each host was configured with a fixed number of map slots and a fixed number of reduce slots. Under
YARN, first, there is no distinction between resources available for maps and resources available for reduces: all resources are available for both. Second, the notion of slots has been discarded, and
resources are now configured in terms of amounts of memory (in megabytes) and CPU (in "virtual cores", which are described below). Resource configuration is an inherently difficult topic, and the
added flexibility that YARN provides in this regard also comes with added complexity. Cloudera Manager picks sensible values automatically, but if you are setting up your cluster manually or are just
interested in the details, read on.

Configuring Memory Settings for YARN and MRv2

Getting the memory configuration for YARN and MRv2 right is important for obtaining the best performance from your cluster. Several different settings are involved. The table below shows the default
settings, as well as the settings that Cloudera recommends, for each configuration option. See Managing
YARN (MRv2) and MapReduce (MRv1) for more configuration specifics; and, for detailed tuning advice with sample configurations, see Tuning YARN.

YARN and MRv2 Memory Configuration

Cloudera Manager Property Name    CDH Property Name                        Default Configuration            Cloudera Tuning Guidelines
Container Memory Minimum          yarn.scheduler.minimum-allocation-mb    1 GB                             0
Container Memory Maximum          yarn.scheduler.maximum-allocation-mb    64 GB                            amount of memory on largest node
Container Memory Increment        yarn.scheduler.increment-allocation-mb  512 MB                           Use a fairly large value, such as 128 MB
Container Memory                  yarn.nodemanager.resource.memory-mb     8 GB                             8 GB
Map Task Memory                   mapreduce.map.memory.mb                 1 GB                             1 GB
Reduce Task Memory                mapreduce.reduce.memory.mb              1 GB                             1 GB
Map Task Java Opts Base           mapreduce.map.java.opts                 -Djava.net.preferIPv4Stack=true  -Djava.net.preferIPv4Stack=true -Xmx768m
Reduce Task Java Opts Base        mapreduce.reduce.java.opts              -Djava.net.preferIPv4Stack=true  -Djava.net.preferIPv4Stack=true -Xmx768m
ApplicationMaster Memory          yarn.app.mapreduce.am.resource.mb       1 GB                             1 GB
ApplicationMaster Java Opts Base  yarn.app.mapreduce.am.command-opts      -Djava.net.preferIPv4Stack=true  -Djava.net.preferIPv4Stack=true -Xmx768m

Resource Requests

From the perspective of a developer requesting resource allocations for a job’s tasks, nothing needs to be changed. Map and reduce task memory requests still work and, additionally,
tasks that will use multiple threads can request more than 1 core with the mapreduce.map.cpu.vcores and mapreduce.reduce.cpu.vcores
properties.

Configuring Host Capacities

In MRv1, the mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum properties dictated how many
map and reduce slots each TaskTracker had. These properties no longer exist in YARN. Instead, YARN uses yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores, which control the amount of memory and CPU on each host, both available to both maps and reduces. If you were using Cloudera Manager to
configure these automatically, Cloudera Manager takes care of it in MRv2 as well. If you are configuring these manually, set them to the amount of memory and number of cores on the machine after
subtracting out resources needed for other services.
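
A minimal sketch of that subtraction (the host numbers and the 4 GB / 1 core reserved for other services are illustrative; in practice the results go in yarn-site.xml, or Cloudera Manager sets them for you):

    public class HostCapacity {
      public static void main(String[] args) {
        int hostMemoryMb = 24 * 1024;   // a 24 GB host
        int hostCores = 6;
        int otherServicesMb = 4 * 1024; // DataNode, other daemons, etc. (assumed)
        int otherServicesCores = 1;

        // Values for yarn.nodemanager.resource.memory-mb and
        // yarn.nodemanager.resource.cpu-vcores:
        System.out.println(hostMemoryMb - otherServicesMb); // 20480
        System.out.println(hostCores - otherServicesCores); // 5
      }
    }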

Virtual Cores

To better handle varying CPU requests, YARN supports virtual cores (vcores), a resource meant to express parallelism. The "virtual" in the name is somewhat misleading: on the
NodeManager, vcores should be configured to equal the number of physical cores on the machine. Tasks should request vcores equal to the number of cores they can saturate at once. Currently
vcores are very coarse; tasks will rarely want to ask for more than one of them, but a complementary axis that expresses processing power may be added in the future to enable finer-grained resource
configuration.

Rounding Request Sizes

Also noteworthy are the yarn.scheduler.minimum-allocation-mb, yarn.scheduler.minimum-allocation-vcores, yarn.scheduler.increment-allocation-mb, and yarn.scheduler.increment-allocation-vcores properties, which default to 1024, 1, 512, and 1 respectively.
If tasks are submitted with resource requests lower than the minimum-allocation values, their requests are raised to those values. If tasks are submitted with resource requests that are not
multiples of the increment-allocation values, their requests are rounded up to the nearest increments. For example, with the defaults above, a request for 1200 MB is rounded up to 1536 MB, the nearest multiple of 512 MB at or above the minimum.
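
A minimal sketch of that normalization rule (this mirrors the behavior described above; it is not a Hadoop API):

    public class RequestRounding {
      // Raise to the minimum, then round up to the nearest increment.
      static int normalize(int requestedMb, int minimumMb, int incrementMb) {
        int mb = Math.max(requestedMb, minimumMb);
        return ((mb + incrementMb - 1) / incrementMb) * incrementMb;
      }

      public static void main(String[] args) {
        System.out.println(normalize(1200, 1024, 512)); // 1536
        System.out.println(normalize(100, 1024, 512));  // 1024 (below the minimum)
      }
    }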

To make all of this more concrete, consider an example. Each host in the cluster has 24 GB of memory and 6 cores. Other services running on the hosts require 4 GB and 1 core, so we set
yarn.nodemanager.resource.memory-mb to 20480 and yarn.nodemanager.resource.cpu-vcores to 5. If you leave the map and reduce task defaults
of 1024 MB and 1 virtual core intact, you will have at most 5 tasks running at the same time on each host. If you want each of your tasks to use 5 GB, set their mapreduce.(map|reduce).memory.mb to 5120, which limits you to 4 tasks per host running at the same time.
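
The same arithmetic as a toy sketch (not a Hadoop API): the per-host task count is the tighter of the memory and vcore constraints.

    public class ConcurrentTasks {
      static int maxConcurrentTasks(int hostMemMb, int hostVcores,
                                    int taskMemMb, int taskVcores) {
        return Math.min(hostMemMb / taskMemMb, hostVcores / taskVcores);
      }

      public static void main(String[] args) {
        // Defaults: 1024 MB / 1 vcore per task -> vcore-limited at 5 tasks.
        System.out.println(maxConcurrentTasks(20480, 5, 1024, 1)); // 5
        // 5 GB tasks -> memory-limited at 4 tasks.
        System.out.println(maxConcurrentTasks(20480, 5, 5120, 1)); // 4
      }
    }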

Scheduler Configuration

Cloudera recommends using the Fair Scheduler in MRv2. (FIFO and the Capacity Scheduler are also available.) Fair Scheduler allocation files require changes in light of the new way that
resources work. The minMaps, maxMaps, minReduces, and maxReduces queue properties have been replaced with minResources and maxResources properties. Instead of taking a number of slots, these properties take a value like "1024 MB, 3 vcores". By default,
the MRv2 Fair Scheduler attempts to equalize memory allocations in the same way it attempted to equalize slot allocations in MRv1. The MRv2 Fair Scheduler also contains a number of new features,
including hierarchical queues and fairness based on multiple resources.

Administration Commands

The jobtracker and tasktracker commands, which started the JobTracker and TaskTracker, are no longer supported because these
services no longer exist. They are replaced with yarn resourcemanager and yarn nodemanager, which start the ResourceManager and
NodeManager respectively. hadoop mradmin is no longer supported; instead, use yarn rmadmin. The new admin commands mimic the
functionality of their MRv1 counterparts, allowing nodes, queues, and ACLs to be refreshed while the ResourceManager is running.

Security

The following section outlines the additional changes needed to migrate a secure cluster.

New YARN Kerberos service principals should be created for the ResourceManager and NodeManager, using the pattern used for other Hadoop services, that is, yarn@HOST. The mapred principal should still be used for the JobHistory Server. If you are using Cloudera Manager to configure security, this will be
taken care of automatically.

As in MRv1, a configuration must be set to have the user who submits a job own its task processes. The equivalent of the MRv1 LinuxTaskController is the LinuxContainerExecutor. In a
secure setup, NodeManager configurations should set yarn.nodemanager.container-executor.class to org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor. Properties set in the taskcontroller.cfg configuration file should be migrated to
their analogous properties in the container-executor.cfg file.

In secure setups, configuring hadoop-policy.xml allows administrators to set up access control lists on internal protocols. The following is a table of
MRv1 options and their MRv2 equivalents:

MRv1                                     MRv2                                                   Comment
security.task.umbilical.protocol.acl    security.job.task.protocol.acl                         As in MRv1, this should never be set to anything other than *
security.inter.tracker.protocol.acl     security.resourcetracker.protocol.acl
security.job.submission.protocol.acl    security.applicationclient.protocol.acl
security.admin.operations.protocol.acl  security.resourcemanager-administration.protocol.acl
No MRv1 equivalent                      security.applicationmaster.protocol.acl
No MRv1 equivalent                      security.containermanagement.protocol.acl
No MRv1 equivalent                      security.resourcelocalizer.protocol.acl
No MRv1 equivalent                      security.job.client.protocol.acl

Queue access control lists (ACLs) are now placed in the Fair Scheduler configuration file instead of the JobTracker configuration. A list of users and groups that can submit jobs to a
queue can be placed in aclSubmitApps in the queue’s configuration. The queue administration ACL is no longer supported, but will be in a future release.

Ports

The following is a list of default ports used by MRv2 and YARN, as well as the configuration properties used to configure them.

Port   Use                                                       Property
8032   ResourceManager Client RPC                                yarn.resourcemanager.address
8030   ResourceManager Scheduler RPC (for ApplicationMasters)    yarn.resourcemanager.scheduler.address
8033   ResourceManager Admin RPC                                 yarn.resourcemanager.admin.address
8088   ResourceManager Web UI and REST APIs                      yarn.resourcemanager.webapp.address
8031   ResourceManager Resource Tracker RPC (for NodeManagers)   yarn.resourcemanager.resource-tracker.address
8040   NodeManager Localizer RPC                                 yarn.nodemanager.localizer.address
8042   NodeManager Web UI and REST APIs                          yarn.nodemanager.webapp.address
10020  Job History RPC                                           mapreduce.jobhistory.address
19888  Job History Web UI and REST APIs                          mapreduce.jobhistory.webapp.address
13562  Shuffle HTTP                                              mapreduce.shuffle.port

Note: You can set yarn.resourcemanager.hostname.id for each ResourceManager instead of setting the individual ResourceManager
address properties above; this causes YARN to use the default ports on those hosts.

High Availability

YARN supports ResourceManager HA to make a YARN cluster highly available; the underlying active-standby architecture is similar to JobTracker HA in MRv1. A major improvement
over MRv1 is that in YARN, the completed tasks of in-flight MapReduce jobs are not re-run on recovery after the ResourceManager is restarted or fails over. Furthermore, the configuration and setup have
been simplified. The main differences are:

The failover controller has been moved from a separate ZKFC daemon into the ResourceManager itself, so there is no need to run an additional daemon.

Clients, applications, and NodeManagers do not require configuring a proxy-provider to talk to the active ResourceManager.

Below is a table with HA-related configurations used in MRv1 and their equivalents in YARN:

Look at all the service configurations placed in mapred-site.xml and replace them with their corresponding YARN configurations. Configurations starting
with yarn should be placed inside yarn-site.xml, not mapred-site.xml. Refer to Resource Configuration for best practices on converting TaskTracker slot capacities (mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum) to NodeManager resource capacities (yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores), as well as on converting configurations in the Fair Scheduler
allocations file, fair-scheduler.xml.

Start the ResourceManager, NodeManagers, and the JobHistoryServer.

Web UI

In MRv1, the JobTracker Web UI served detailed information about the state of the cluster and the jobs (recent and current) running on it. It also contained the job history page, which
served information from disk about older jobs.

The MRv2 Web UI provides the same information structured in the same way, but has been revamped with a new look and feel. The ResourceManager’s UI, which includes information about
running applications and the state of the cluster, is now located by default at <ResourceManager host>:8088. The JobHistory UI is now located by default at <JobHistoryServer host>:19888.
Jobs can be searched and viewed there just as they could in MRv1.

Because the ResourceManager is meant to be agnostic to many of the concepts in MapReduce, it cannot host job information directly. Instead, it proxies to a Web UI that can. If the job is
running, this proxy is the relevant MapReduce ApplicationMaster; if the job has completed, then this proxy is the JobHistoryServer. Thus, the user experience is similar to that of MRv1, but the
information is now coming from different places.

Summary of Configuration Changes

The following tables summarize the changes in configuration parameters between MRv1 and MRv2.
