Administrators should use the conf/hadoop-env.sh and conf/yarn-env.sh scripts to do site-specific customization of the Hadoop daemons' process environment.

At the very least, you should specify JAVA_HOME so that it is correctly defined on each remote node.

In most cases you should also specify HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR to point to directories that can only be written to by the users that are going to run the Hadoop daemons. Otherwise there is the potential for a symlink attack.
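For instance, a minimal sketch in hadoop-env.sh (the directory paths are placeholders, not mandated values):

```bash
# Placeholder locations: pick directories writable only by the daemon users.
export HADOOP_PID_DIR=/var/run/hadoop
export HADOOP_SECURE_DN_PID_DIR=/var/run/hadoop-hdfs
```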

Administrators can configure individual daemons using the configuration options shown in the table below:

| Daemon | Environment Variable |
|--------|----------------------|
| NameNode | HADOOP_NAMENODE_OPTS |
| DataNode | HADOOP_DATANODE_OPTS |
| Secondary NameNode | HADOOP_SECONDARYNAMENODE_OPTS |
| ResourceManager | YARN_RESOURCEMANAGER_OPTS |
| NodeManager | YARN_NODEMANAGER_OPTS |
| WebAppProxy | YARN_PROXYSERVER_OPTS |
| Map Reduce Job History Server | HADOOP_JOB_HISTORYSERVER_OPTS |

For example, to configure the NameNode to use parallel GC, the following statement should be added in hadoop-env.sh:
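```bash
# In conf/hadoop-env.sh
export HADOOP_NAMENODE_OPTS="-XX:+UseParallelGC"
```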

Other useful configuration parameters that you can customize include:

HADOOP_LOG_DIR / YARN_LOG_DIR - The directory where the daemons' log files are stored. Log directories are automatically created if they don't exist.

HADOOP_HEAPSIZE / YARN_HEAPSIZE - The maximum heap size to use, in MB, e.g. if the variable is set to 1000 the heap will be set to 1000 MB. This is used to configure the heap size for the daemon. By default, the value is 1000. If you want to configure the values separately for each daemon, you can use the per-daemon heap size variables, such as YARN_RESOURCEMANAGER_HEAPSIZE and HADOOP_JOB_HISTORYSERVER_HEAPSIZE.
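For illustration (the numbers are arbitrary, not recommendations), in yarn-env.sh:

```bash
# Global heap for YARN daemons, plus a larger heap just for the ResourceManager.
export YARN_HEAPSIZE=1000
export YARN_RESOURCEMANAGER_HEAPSIZE=2000
```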

This section deals with important parameters to be specified in the given configuration files:

conf/core-site.xml

| Parameter | Value | Notes |
|-----------|-------|-------|
| fs.defaultFS | NameNode URI | hdfs://host:port/ |
| io.file.buffer.size | 131072 | Size of read/write buffer used in SequenceFiles. |
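As a sketch, the two parameters above could be set in conf/core-site.xml like this (nn.example.com is a placeholder NameNode host and 8020 an assumed port):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nn.example.com:8020/</value> <!-- placeholder NameNode URI -->
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value> <!-- 128 KB read/write buffer for SequenceFiles -->
  </property>
</configuration>
```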

conf/hdfs-site.xml

Configurations for NameNode:

| Parameter | Value | Notes |
|-----------|-------|-------|
| dfs.namenode.name.dir | Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently. | If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy. |
| dfs.namenode.hosts / dfs.namenode.hosts.exclude | List of permitted/excluded DataNodes. | If necessary, use these files to control the list of allowable DataNodes. |
| dfs.blocksize | 268435456 | HDFS blocksize of 256MB for large file-systems. |
| dfs.namenode.handler.count | 100 | More NameNode server threads to handle RPCs from a large number of DataNodes. |

Configurations for DataNode:

| Parameter | Value | Notes |
|-----------|-------|-------|
| dfs.datanode.data.dir | Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks. | If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. |
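A combined conf/hdfs-site.xml sketch using the parameters above (the local paths are placeholders):

```xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/1/dfs/nn,/data/2/dfs/nn</value> <!-- placeholder dirs; the name table is replicated to both -->
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value> <!-- 256 MB blocks for large file-systems -->
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/1/dfs/dn,/data/2/dfs/dn</value> <!-- placeholder dirs, typically on different devices -->
  </property>
</configuration>
```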

conf/yarn-site.xml

Configurations for ResourceManager and NodeManager:

| Parameter | Value | Notes |
|-----------|-------|-------|
| yarn.acl.enable | true / false | Enable ACLs? Defaults to false. |
| yarn.admin.acl | Admin ACL | ACL to set admins on the cluster. ACLs are of the form comma-separated-users space comma-separated-groups. Defaults to the special value of *, which means anyone. The special value of just space means no one has access. |
| yarn.log-aggregation-enable | false | Configuration to enable or disable log aggregation. |

Configurations for ResourceManager:

| Parameter | Value | Notes |
|-----------|-------|-------|
| yarn.resourcemanager.address | ResourceManager host:port for clients to submit jobs. | host:port |
| yarn.resourcemanager.scheduler.address | ResourceManager host:port for ApplicationMasters to talk to the Scheduler to obtain resources. | host:port |
| yarn.resourcemanager.nodes.include-path / yarn.resourcemanager.nodes.exclude-path | List of permitted/excluded NodeManagers. | If necessary, use these files to control the list of allowable NodeManagers. |

Configurations for NodeManager:

| Parameter | Value | Notes |
|-----------|-------|-------|
| yarn.nodemanager.resource.memory-mb | Resource, i.e. available physical memory, in MB, for the given NodeManager | Defines total available resources on the NodeManager to be made available to running containers. |
| yarn.nodemanager.vmem-pmem-ratio | Maximum ratio by which virtual memory usage of tasks may exceed physical memory | The virtual memory usage of each task may exceed its physical memory limit by this ratio. The total amount of virtual memory used by tasks on the NodeManager may exceed its physical memory usage by this ratio. |
| yarn.nodemanager.local-dirs | Comma-separated list of paths on the local filesystem where intermediate data is written. | Multiple paths help spread disk i/o. |
| yarn.nodemanager.log-dirs | Comma-separated list of paths on the local filesystem where logs are written. | Multiple paths help spread disk i/o. |
| yarn.nodemanager.log.retain-seconds | 10800 | Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled. |
| yarn.nodemanager.remote-app-log-dir | /logs | HDFS directory where the application logs are moved on application completion. Appropriate permissions need to be set. Only applicable if log-aggregation is enabled. |
| yarn.nodemanager.remote-app-log-dir-suffix | logs | Suffix appended to the remote log dir. Logs will be aggregated to ${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam}. Only applicable if log-aggregation is enabled. |
| yarn.nodemanager.aux-services | mapreduce_shuffle | Shuffle service that needs to be set for Map Reduce applications. |
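Pulling a few of these together, a conf/yarn-site.xml sketch (rm.example.com is a placeholder host, 8032 a commonly used ResourceManager port, and the memory figure an assumption for an 8 GB node):

```xml
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>rm.example.com:8032</value> <!-- placeholder host:port for job submission -->
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value> <!-- shuffle service required by MapReduce -->
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value> <!-- assumed: 8 GB of physical memory offered to containers -->
  </property>
</configuration>
```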

Configurations for History Server (Needs to be moved elsewhere):

| Parameter | Value | Notes |
|-----------|-------|-------|
| yarn.log-aggregation.retain-seconds | -1 | How long to keep aggregated logs before deleting them. -1 disables. Be careful: if you set this too small, you will spam the name node. |
| yarn.log-aggregation.retain-check-interval-seconds | -1 | Time between checks for aggregated log retention. If set to 0 or a negative value, then the value is computed as one-tenth of the aggregated log retention time. Be careful: if you set this too small, you will spam the name node. |

conf/mapred-site.xml

Configurations for MapReduce Applications:

| Parameter | Value | Notes |
|-----------|-------|-------|
| mapreduce.framework.name | yarn | Execution framework set to Hadoop YARN. |
| mapreduce.map.memory.mb | 1536 | Larger resource limit for maps. |
| mapreduce.map.java.opts | -Xmx1024M | Larger heap-size for child jvms of maps. |
| mapreduce.reduce.memory.mb | 3072 | Larger resource limit for reduces. |
| mapreduce.reduce.java.opts | -Xmx2560M | Larger heap-size for child jvms of reduces. |
| mapreduce.task.io.sort.mb | 512 | Higher memory-limit while sorting data for efficiency. |
| mapreduce.task.io.sort.factor | 100 | More streams merged at once while sorting files. |
| mapreduce.reduce.shuffle.parallelcopies | 50 | Higher number of parallel copies run by reduces to fetch outputs from very large number of maps. |
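As a sketch, the values from the table above translate into conf/mapred-site.xml entries like these:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value> <!-- run MapReduce on YARN -->
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1536</value> <!-- container limit for map tasks -->
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024M</value> <!-- map JVM heap, kept below the container limit -->
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value> <!-- container limit for reduce tasks -->
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2560M</value> <!-- reduce JVM heap, kept below the container limit -->
  </property>
</configuration>
```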

Configurations for MapReduce JobHistory Server:

| Parameter | Value | Notes |
|-----------|-------|-------|
| mapreduce.jobhistory.address | MapReduce JobHistory Server host:port | Default port is 10020. |
| mapreduce.jobhistory.webapp.address | MapReduce JobHistory Server Web UI host:port | Default port is 19888. |
| mapreduce.jobhistory.intermediate-done-dir | /mr-history/tmp | Directory where history files are written by MapReduce jobs. |
| mapreduce.jobhistory.done-dir | /mr-history/done | Directory where history files are managed by the MR JobHistory Server. |
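A corresponding sketch for the JobHistory Server (jhs.example.com is a placeholder host; the ports are the defaults from the table):

```xml
<configuration>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>jhs.example.com:10020</value> <!-- placeholder host; 10020 is the default port -->
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>jhs.example.com:19888</value> <!-- placeholder host; 19888 is the default web UI port -->
  </property>
</configuration>
```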

The NameNode and the ResourceManager obtain the rack information of the slaves in the cluster by invoking a resolve API in an administrator-configured module.

The API resolves the DNS name (or IP address) to a rack id.

The site-specific module to use can be configured using the configuration item topology.node.switch.mapping.impl. The default implementation runs a script/command configured using topology.script.file.name. If topology.script.file.name is not set, the rack id /default-rack is returned for any passed IP address.
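As an illustration, a topology script might look like the following minimal sketch (the subnets and rack names are assumptions about one particular cluster layout); its path would then be set as the value of topology.script.file.name:

```bash
#!/bin/bash
# Hypothetical topology script: Hadoop passes one or more DNS names or IP
# addresses as arguments and expects one rack id per input, one per line.
while [ $# -gt 0 ]; do
  case "$1" in
    10.1.1.*) echo "/rack1" ;;        # assumed subnet-to-rack mapping
    10.1.2.*) echo "/rack2" ;;
    *)        echo "/default-rack" ;; # fallback for unknown hosts
  esac
  shift
done
```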

Hadoop provides a mechanism by which administrators can configure the NodeManager to periodically run an administrator-supplied script to determine whether a node is healthy.

Administrators can determine if the node is in a healthy state by performing any checks of their choice in the script. If the script detects the node to be in an unhealthy state, it must print a line to standard output beginning with the string ERROR. The NodeManager spawns the script periodically and checks its output. If the script's output contains the string ERROR, as described above, the node's status is reported as unhealthy and the node is black-listed by the ResourceManager. No further tasks will be assigned to this node. However, the NodeManager continues to run the script, so that if the node becomes healthy again, it will be removed from the blacklisted nodes on the ResourceManager automatically. The node's health along with the output of the script, if it is unhealthy, is available to the administrator in the ResourceManager web interface. The time since the node was healthy is also displayed on the web interface.
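For example, a minimal health script sketch (the 95% threshold and the choice of check are assumptions, not recommendations):

```bash
#!/bin/bash
# Hypothetical health check: report the node unhealthy when the root
# filesystem is nearly full. Any output line beginning with "ERROR"
# causes the NodeManager to report the node as unhealthy.
used=$(df -P / | awk 'NR==2 { gsub(/%/, "", $5); print $5 }')
if [ "$used" -gt 95 ]; then
  echo "ERROR root filesystem is ${used}% full"
fi
```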

The following parameters can be used to control the node health monitoring script in conf/yarn-site.xml.

| Parameter | Value | Notes |
|-----------|-------|-------|
| yarn.nodemanager.health-checker.script.path | Node health script | Script to check for node's health status. |
| yarn.nodemanager.health-checker.script.opts | Node health script options | Options for script to check for node's health status. |
| yarn.nodemanager.health-checker.script.interval-ms | Node health script interval | Time interval for running health script. |
| yarn.nodemanager.health-checker.script.timeout-ms | Node health script timeout interval | Timeout for health script execution. |
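For example, a conf/yarn-site.xml sketch wiring up such a script (the path and interval are placeholders):

```xml
<property>
  <name>yarn.nodemanager.health-checker.script.path</name>
  <value>/etc/hadoop/conf/health-check.sh</value> <!-- placeholder path to the script -->
</property>
<property>
  <name>yarn.nodemanager.health-checker.script.interval-ms</name>
  <value>600000</value> <!-- assumed: run the check every 10 minutes -->
</property>
```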

The health checker script is not supposed to report ERROR if only some of the local disks go bad. The NodeManager periodically checks the health of the local disks (specifically, the directories in nodemanager-local-dirs and nodemanager-log-dirs); once the number of bad directories crosses the threshold set by the config property yarn.nodemanager.disk-health-checker.min-healthy-disks, the whole node is marked unhealthy and this information is also sent to the ResourceManager. The boot disk is either RAIDed, or a failure of the boot disk is identified by the health checker script.

Typically you choose one machine in the cluster to act as the NameNode and one machine to act as the ResourceManager, exclusively. The rest of the machines act as both a DataNode and a NodeManager and are referred to as slaves.

List all slave hostnames or IP addresses in your conf/slaves file, one per line.
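For example, a conf/slaves file for a three-slave cluster (the hostnames are placeholders):

```
slave1.example.com
slave2.example.com
slave3.example.com
```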