Factory to create client IPC classes.yarn.ipc.client.factory.classFactory to create server IPC classes.yarn.ipc.server.factory.classFactory to create serializable records.yarn.ipc.record.factory.classRPC class implementationyarn.ipc.rpc.classorg.apache.hadoop.yarn.ipc.HadoopYarnProtoRPCThe hostname of the RM.yarn.resourcemanager.hostname0.0.0.0The address of the applications manager interface in the RM.yarn.resourcemanager.address${yarn.resourcemanager.hostname}:8032
The actual address the server will bind to. If this optional address is
set, the RPC and webapp servers will bind to this address and the port specified in
yarn.resourcemanager.address and yarn.resourcemanager.webapp.address, respectively. This
is most useful for making the RM listen on all interfaces by setting it to 0.0.0.0.
yarn.resourcemanager.bind-host
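For illustration, a minimal yarn-site.xml sketch of the hostname and bind-host interplay just described; the host rm1.example.com is a placeholder, not a value from this file:
<!-- Hypothetical yarn-site.xml snippet; rm1.example.com is a placeholder host. -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm1.example.com</value>
</property>
<property>
  <!-- Bind the RPC and webapp servers to all interfaces while still advertising
       the hostname configured above. -->
  <name>yarn.resourcemanager.bind-host</name>
  <value>0.0.0.0</value>
</property>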
If set to true, then ALL container updates will be automatically sent to
the NM in the next heartbeatyarn.resourcemanager.auto-update.containersfalseThe number of threads used to handle applications manager requests.yarn.resourcemanager.client.thread-count50Number of threads used to launch/cleanup AM.yarn.resourcemanager.amlauncher.thread-count50Retry times to connect with NM.yarn.resourcemanager.nodemanager-connect-retries10Timeout in milliseconds when YARN dispatcher tries to drain the
events. Typically, this happens when service is stopping. e.g. RM drains
the ATS events dispatcher when stopping.
yarn.dispatcher.drain-events.timeout300000The expiry interval for application master reporting.yarn.am.liveness-monitor.expiry-interval-ms600000The Kerberos principal for the resource manager.yarn.resourcemanager.principalThe address of the scheduler interface.yarn.resourcemanager.scheduler.address${yarn.resourcemanager.hostname}:8030Number of threads to handle scheduler interface.yarn.resourcemanager.scheduler.client.thread-count50
Comma separated class names of ApplicationMasterServiceProcessor
implementations. The processors will be applied in the order
they are specified.
yarn.resourcemanager.application-master-service.processors
This configures the HTTP endpoint for YARN Daemons. The following
values are supported:
- HTTP_ONLY : Service is provided only on http
- HTTPS_ONLY : Service is provided only on https
yarn.http.policyHTTP_ONLY
The http address of the RM web application.
If only a host is provided as the value,
the webapp will be served on a random port.
yarn.resourcemanager.webapp.address${yarn.resourcemanager.hostname}:8088
The https address of the RM web application.
If only a host is provided as the value,
the webapp will be served on a random port.
yarn.resourcemanager.webapp.https.address${yarn.resourcemanager.hostname}:8090
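As a sketch of how the HTTP policy and webapp endpoints fit together, the snippet below serves the RM web UI over HTTPS only and keeps the default https port; treat the values as illustrative choices, not recommendations from this file:
<!-- Hypothetical snippet: serve the RM web UI over HTTPS only. -->
<property>
  <name>yarn.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <!-- Explicit https endpoint; the default is ${yarn.resourcemanager.hostname}:8090. -->
  <name>yarn.resourcemanager.webapp.https.address</name>
  <value>${yarn.resourcemanager.hostname}:8090</value>
</property>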
The Kerberos keytab file to be used for spnego filter for the RM web
interface.
yarn.resourcemanager.webapp.spnego-keytab-file
The Kerberos principal to be used for spnego filter for the RM web
interface.
yarn.resourcemanager.webapp.spnego-principal
Add button to kill application in the RM Application view.
yarn.resourcemanager.webapp.ui-actions.enabledtrueTo enable RM web ui2 application.yarn.webapp.ui2.enablefalse
Explicitly provide WAR file path for ui2 if needed.
yarn.webapp.ui2.war-file-pathyarn.resourcemanager.resource-tracker.address${yarn.resourcemanager.hostname}:8031Are acls enabled.yarn.acl.enablefalseAre reservation acls enabled.yarn.acl.reservation-enablefalseACL of who can be admin of the YARN cluster.yarn.admin.acl*The address of the RM admin interface.yarn.resourcemanager.admin.address${yarn.resourcemanager.hostname}:8033Number of threads used to handle RM admin interface.yarn.resourcemanager.admin.client.thread-count1Maximum time to wait to establish connection to
ResourceManager.yarn.resourcemanager.connect.max-wait.ms900000How often to try connecting to the
ResourceManager.yarn.resourcemanager.connect.retry-interval.ms30000The maximum number of application attempts. It's a global
setting for all application masters. Each application master can specify
its individual maximum number of application attempts via the API, but the
individual number cannot be more than the global upper bound. If it is,
the resourcemanager will override it. The default number is set to 2, to
allow at least one retry for AM.yarn.resourcemanager.am.max-attempts2How often to check that containers are still alive. yarn.resourcemanager.container.liveness-monitor.interval-ms600000The keytab for the resource manager.yarn.resourcemanager.keytab/etc/krb5.keytabFlag to enable override of the default kerberos authentication
filter with the RM authentication filter to allow authentication using
delegation tokens (fallback to kerberos if the tokens are missing). Only
applicable when the http authentication type is kerberos.yarn.resourcemanager.webapp.delegation-token-auth-filter.enabledtrueFlag to enable cross-origin (CORS) support in the RM. This flag
requires the CORS filter initializer to be added to the filter initializers
list in core-site.xml.yarn.resourcemanager.webapp.cross-origin.enabledfalseHow long to wait until a node manager is considered dead.yarn.nm.liveness-monitor.expiry-interval-ms600000Path to file with nodes to include.yarn.resourcemanager.nodes.include-pathPath to file with nodes to exclude.yarn.resourcemanager.nodes.exclude-pathThe expiry interval for node IP caching. -1 disables the cachingyarn.resourcemanager.node-ip-cache.expiry-interval-secs-1Number of threads to handle resource tracker calls.yarn.resourcemanager.resource-tracker.client.thread-count50The class to use as the resource scheduler.yarn.resourcemanager.scheduler.classorg.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerThe minimum allocation for every container request at the RM
in MBs. Memory requests lower than this will be set to the value of this
property. Additionally, a node manager that is configured to have less memory
than this value will be shut down by the resource manager.yarn.scheduler.minimum-allocation-mb1024The maximum allocation for every container request at the RM
in MBs. Memory requests higher than this will throw an
InvalidResourceRequestException.yarn.scheduler.maximum-allocation-mb8192The minimum allocation for every container request at the RM
in terms of virtual CPU cores. Requests lower than this will be set to the
value of this property. Additionally, a node manager that is configured to
have fewer virtual cores than this value will be shut down by the resource
manager.yarn.scheduler.minimum-allocation-vcores1The maximum allocation for every container request at the RM
in terms of virtual CPU cores. Requests higher than this will throw an
InvalidResourceRequestException.yarn.scheduler.maximum-allocation-vcores4
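To make the allocation bounds concrete, here is a sketch that tightens the memory and vcore limits together; the 2048/16384 MB and 1/8 vcore figures are illustrative assumptions (the defaults in this file are 1024/8192 MB and 1/4 vcores):
<!-- Hypothetical sizing: requests are rounded up to at least 2 GB / 1 vcore and
     rejected above 16 GB / 8 vcores with InvalidResourceRequestException. -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>16384</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>8</value>
</property>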
Used by node labels. If set to true, the port should be included in the
node name. Only usable if your scheduler supports node labels.
yarn.scheduler.include-port-in-node-namefalseEnable RM to recover state after starting. If true, then
yarn.resourcemanager.store.class must be specified. yarn.resourcemanager.recovery.enabledfalseShould RM fail fast if it encounters any errors. By default, it
points to ${yarn.fail-fast}. Errors include:
1) exceptions when state-store write/read operations fail.
yarn.resourcemanager.fail-fast${yarn.fail-fast}Should YARN fail fast if it encounters any errors.
This is a global config for all other components including RM,NM etc.
If no value is set for component-specific config (e.g. yarn.resourcemanager.fail-fast),
this value will be the default.
yarn.fail-fastfalseEnable RM work preserving recovery. This configuration is private
to YARN for experimenting with the feature.
yarn.resourcemanager.work-preserving-recovery.enabledtrueSet the amount of time RM waits before allocating new
containers on work-preserving recovery. This wait period gives the RM a chance
to settle down while resyncing with NMs in the cluster on recovery, before assigning
new containers to applications.
yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms10000The class to use as the persistent store.
If org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
is used, the store is implicitly fenced; meaning a single ResourceManager
is able to use the store at any point in time. More details on this
implicit fencing, along with setting up appropriate ACLs is discussed
under yarn.resourcemanager.zk-state-store.root-node.acl.
yarn.resourcemanager.store.classorg.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStoreWhen automatic failover is enabled, the number of times ZooKeeper
operations are retried by the ActiveStandbyElector.yarn.resourcemanager.ha.failover-controller.active-standby-elector.zk.retriesThe maximum number of completed applications the RM state
store keeps, less than or equal to ${yarn.resourcemanager.max-completed-applications}.
By default, it equals ${yarn.resourcemanager.max-completed-applications}.
This ensures that the applications kept in the state store are consistent with
the applications remembered in RM memory.
Any values larger than ${yarn.resourcemanager.max-completed-applications} will
be reset to ${yarn.resourcemanager.max-completed-applications}.
Note that this value impacts the RM recovery performance. Typically,
a smaller value indicates better performance on RM recovery.
yarn.resourcemanager.state-store.max-completed-applications${yarn.resourcemanager.max-completed-applications}Full path of the ZooKeeper znode where RM state will be
stored. This must be supplied when using
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
as the value for yarn.resourcemanager.store.classyarn.resourcemanager.zk-state-store.parent-path/rmstore
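A minimal recovery sketch using the ZooKeeper state store described above. The ZooKeeper quorum property shown (hadoop.zk.address) and its hosts are assumptions not defined in this file:
<!-- Hypothetical snippet: enable RM recovery backed by ZooKeeper.
     The ZooKeeper quorum property and host names below are assumptions. -->
<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-state-store.parent-path</name>
  <value>/rmstore</value>
</property>
<property>
  <!-- Assumed quorum property/value; placeholder hosts. -->
  <name>hadoop.zk.address</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>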
ACLs to be used for the root znode when using ZKRMStateStore in an HA
scenario for fencing.
ZKRMStateStore supports implicit fencing to allow a single
ResourceManager write-access to the store. For fencing, the
ResourceManagers in the cluster share read-write-admin privileges on the
root node, but the Active ResourceManager claims exclusive create-delete
permissions.
By default, when this property is not set, we use the ACLs from
yarn.resourcemanager.zk-acl for shared admin access and
rm-address:random-number for username-based exclusive create-delete
access.
This property allows users to set ACLs of their choice instead of using
the default mechanism. For fencing to work, the ACLs should be
carefully set differently on each ResourceManager such that all the
ResourceManagers have shared admin access and the Active ResourceManager
takes over (exclusively) the create-delete access.
yarn.resourcemanager.zk-state-store.root-node.aclURI pointing to the location of the FileSystem path where
RM state will be stored. This must be supplied when using
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
as the value for yarn.resourcemanager.store.classyarn.resourcemanager.fs.state-store.uri${hadoop.tmp.dir}/yarn/system/rmstorethe number of retries to recover from IOException in
FileSystemRMStateStore.
yarn.resourcemanager.fs.state-store.num-retries0Retry interval in milliseconds in FileSystemRMStateStore.
yarn.resourcemanager.fs.state-store.retry-interval-ms1000Local path where the RM state will be stored when using
org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore
as the value for yarn.resourcemanager.store.classyarn.resourcemanager.leveldb-state-store.path${hadoop.tmp.dir}/yarn/system/rmstoreThe time in seconds between full compactions of the leveldb
database. Setting the interval to zero disables the full compaction
cycles.yarn.resourcemanager.leveldb-state-store.compaction-interval-secs3600Enable RM high-availability. When enabled,
(1) The RM starts in the Standby mode by default, and transitions to
the Active mode when prompted to.
(2) The nodes in the RM ensemble are listed in
yarn.resourcemanager.ha.rm-ids
(3) The id of each RM either comes from yarn.resourcemanager.ha.id
if yarn.resourcemanager.ha.id is explicitly specified or can be
figured out by matching yarn.resourcemanager.address.{id} with local address
(4) The actual physical addresses come from the configs of the pattern
- {rpc-config}.{id}yarn.resourcemanager.ha.enabledfalseEnable automatic failover.
By default, it is enabled only when HA is enabledyarn.resourcemanager.ha.automatic-failover.enabledtrueEnable embedded automatic failover.
By default, it is enabled only when HA is enabled.
The embedded elector relies on the RM state store to handle fencing,
and is primarily intended to be used in conjunction with ZKRMStateStore.
yarn.resourcemanager.ha.automatic-failover.embeddedtrueThe base znode path to use for storing leader information,
when using ZooKeeper based leader election.yarn.resourcemanager.ha.automatic-failover.zk-base-path/yarn-leader-electionIndex at which last section of application id (with each section
separated by _ in application id) will be split so that application znode
stored in zookeeper RM state store will be stored as two different znodes
(parent-child). Split is done from the end.
For instance, with no split, appid znode will be of the form
application_1352994193343_0001. If the value of this config is 1, the
appid znode will be broken into two parts application_1352994193343_000
and 1 respectively with former being the parent node.
application_1352994193343_0002 will then be stored as 2 under the parent
node application_1352994193343_000. This config can take values from 0 to 4.
0 means there will be no split. If the configuration value is outside this
range, it will be treated as 0 (i.e. no split). A value
larger than 0 (up to 4) should be configured if you are storing a large number
of apps in ZK based RM state store and state store operations are failing due to
LenError in Zookeeper.yarn.resourcemanager.zk-appid-node.split-index0Index at which the RM Delegation Token ids will be split so
that the delegation token znodes stored in the zookeeper RM state store
will be stored as two different znodes (parent-child). The split is done
from the end. For instance, with no split, a delegation token znode will
be of the form RMDelegationToken_123456789. If the value of this config is
1, the delegation token znode will be broken into two parts:
RMDelegationToken_12345678 and 9 respectively with former being the parent
node. This config can take values from 0 to 4. 0 means there will be no
split. If the value is outside this range, it will be treated as 0 (i.e.
no split). A value larger than 0 (up to 4) should be configured if you are
running a large number of applications, with long-lived delegation tokens
and state store operations (e.g. failover) are failing due to LenError in
Zookeeper.yarn.resourcemanager.zk-delegation-token-node.split-index0Specifies the maximum size of the data that can be stored
in a znode. The value should be the same as or less than the jute.maxbuffer configured
in zookeeper. The default value configured is 1MB.yarn.resourcemanager.zk-max-znode-size.bytes1048576Name of the cluster. In an HA setting,
this is used to ensure the RM participates in leader
election for this cluster and ensures it does not affect
other clustersyarn.resourcemanager.cluster-idThe list of RM nodes in the cluster when HA is
enabled. See description of yarn.resourcemanager.ha
.enabled for full details on how this is used.yarn.resourcemanager.ha.rm-idsThe id (string) of the current RM. When HA is enabled, this
is an optional config. The id of current RM can be set by explicitly
specifying yarn.resourcemanager.ha.id or figured out by matching
yarn.resourcemanager.address.{id} with the local address.
See description of yarn.resourcemanager.ha.enabled
for full details on how this is used.yarn.resourcemanager.ha.idWhen HA is enabled, the class to be used by Clients, AMs and
NMs to failover to the Active RM. It should extend
org.apache.hadoop.yarn.client.RMFailoverProxyProvideryarn.client.failover-proxy-providerorg.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProviderWhen HA is enabled, the max number of times
FailoverProxyProvider should attempt failover. When set,
this overrides the yarn.resourcemanager.connect.max-wait.ms. When
not set, this is inferred from
yarn.resourcemanager.connect.max-wait.ms.yarn.client.failover-max-attemptsWhen HA is enabled, the sleep base (in milliseconds) to be
used for calculating the exponential delay between failovers. When set,
this overrides the yarn.resourcemanager.connect.* settings. When
not set, yarn.resourcemanager.connect.retry-interval.ms is used instead.
yarn.client.failover-sleep-base-msWhen HA is enabled, the maximum sleep time (in milliseconds)
between failovers. When set, this overrides the
yarn.resourcemanager.connect.* settings. When not set,
yarn.resourcemanager.connect.retry-interval.ms is used instead.yarn.client.failover-sleep-max-msWhen HA is enabled, the number of retries per
attempt to connect to a ResourceManager. In other words,
it is the ipc.client.connect.max.retries to be used during
failover attemptsyarn.client.failover-retries0When HA is enabled, the number of retries per
attempt to connect to a ResourceManager on socket timeouts. In other
words, it is the ipc.client.connect.max.retries.on.timeouts to be used
during failover attemptsyarn.client.failover-retries-on-socket-timeouts0The maximum number of completed applications RM keeps. yarn.resourcemanager.max-completed-applications1000Interval at which the delayed token removal thread runsyarn.resourcemanager.delayed.delegation-token.removal-interval-ms30000Maximum size in bytes for configurations that can be provided
by application to RM for delegation token renewal.
By experiment, it's roughly 128 bytes per key-value pair.
The default value of 12800 allows roughly 100 configs, possibly fewer.
yarn.resourcemanager.delegation-token.max-conf-size-bytes12800If true, ResourceManager will have proxy-user privileges.
Use case: In a secure cluster, YARN requires the user hdfs delegation-tokens to
do localization and log-aggregation on behalf of the user. If this is set to true,
ResourceManager is able to request new hdfs delegation tokens on behalf of
the user. This is needed by long-running-service, because the hdfs tokens
will eventually expire and YARN requires new valid tokens to do localization
and log-aggregation. Note that to enable this use case, the corresponding
HDFS NameNode has to configure ResourceManager as the proxy-user so that
ResourceManager can itself ask for new tokens on behalf of the user when
tokens are past their max-life-time.yarn.resourcemanager.proxy-user-privileges.enabledfalseInterval for the roll over for the master key used to generate
application tokens
yarn.resourcemanager.am-rm-tokens.master-key-rolling-interval-secs86400Interval for the roll over for the master key used to generate
container tokens. It is expected to be much greater than
yarn.nm.liveness-monitor.expiry-interval-ms and
yarn.resourcemanager.rm.container-allocation.expiry-interval-ms. Otherwise the
behavior is undefined.
yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs86400The heart-beat interval in milliseconds for every NodeManager in the cluster.yarn.resourcemanager.nodemanagers.heartbeat-interval-ms1000The minimum allowed version of a connecting nodemanager. The valid values are
NONE (no version checking), EqualToRM (the nodemanager's version is equal to
or greater than the RM version), or a Version String.yarn.resourcemanager.nodemanager.minimum.versionNONEEnable a set of periodic monitors (specified in
yarn.resourcemanager.scheduler.monitor.policies) that affect the
scheduler.yarn.resourcemanager.scheduler.monitor.enablefalseThe list of SchedulingEditPolicy classes that interact with
the scheduler. A particular module may be incompatible with the
scheduler, other policies, or a configuration of either.yarn.resourcemanager.scheduler.monitor.policiesorg.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicyThe class to use as the configuration provider.
If org.apache.hadoop.yarn.LocalConfigurationProvider is used,
the local configuration will be loaded.
If org.apache.hadoop.yarn.FileSystemBasedConfigurationProvider is used,
the configuration which will be loaded should be uploaded to remote File system first.
yarn.resourcemanager.configuration.provider-classorg.apache.hadoop.yarn.LocalConfigurationProvider
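For example, switching the RM to the file-system-based provider described here might look like the sketch below; with this provider the configuration files must first be uploaded to the remote path named by the next property:
<!-- Hypothetical snippet: load RM configuration from a remote file system
     instead of the local classpath. -->
<property>
  <name>yarn.resourcemanager.configuration.provider-class</name>
  <value>org.apache.hadoop.yarn.FileSystemBasedConfigurationProvider</value>
</property>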
The value specifies the file system (e.g. HDFS) path where ResourceManager
loads configuration if yarn.resourcemanager.configuration.provider-class
is set to org.apache.hadoop.yarn.FileSystemBasedConfigurationProvider.
yarn.resourcemanager.configuration.file-system-based-store/yarn/confThe setting that controls whether yarn system metrics is
published to the Timeline server (version one) or not, by RM.
This configuration is now deprecated in favor of
yarn.system-metrics-publisher.enabled.yarn.resourcemanager.system-metrics-publisher.enabledfalseThe setting that controls whether yarn system metrics is
published on the Timeline service or not by the RM and NM.yarn.system-metrics-publisher.enabledfalseThe setting that controls whether yarn container events are
published to the timeline service or not by RM. This configuration setting
is for ATS V2.yarn.rm.system-metrics-publisher.emit-container-eventsfalseNumber of worker threads that send the yarn system metrics
data.yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size10Number of diagnostics/failure messages that can be saved in the RM for
log aggregation. It also defines the number of diagnostics/failure
messages that can be shown in the log aggregation web ui.yarn.resourcemanager.max-log-aggregation-diagnostics-in-memory10
RM DelegationTokenRenewer thread count
yarn.resourcemanager.delegation-token-renewer.thread-count50
RM secret key update interval in ms
yarn.resourcemanager.delegation.key.update-interval86400000
RM delegation token maximum lifetime in ms
yarn.resourcemanager.delegation.token.max-lifetime604800000
RM delegation token update interval in ms
yarn.resourcemanager.delegation.token.renew-interval86400000
Thread pool size for RMApplicationHistoryWriter.
yarn.resourcemanager.history-writer.multi-threaded-dispatcher.pool-size10
Comma-separated list of values (in minutes) for schedule queue related
metrics.
yarn.resourcemanager.metrics.runtime.buckets60,300,1440
Interval for the roll over for the master key used to generate
NodeManager tokens. It is expected to be set to a value much larger
than yarn.nm.liveness-monitor.expiry-interval-ms.
yarn.resourcemanager.nm-tokens.master-key-rolling-interval-secs86400
Flag to enable the ResourceManager reservation system.
yarn.resourcemanager.reservation-system.enablefalse
The Java class to use as the ResourceManager reservation system.
By default, it is set to
org.apache.hadoop.yarn.server.resourcemanager.reservation.CapacityReservationSystem
when using CapacityScheduler and is set to
org.apache.hadoop.yarn.server.resourcemanager.reservation.FairReservationSystem
when using FairScheduler.
yarn.resourcemanager.reservation-system.class
The plan follower policy class name to use for the ResourceManager
reservation system.
By default,
org.apache.hadoop.yarn.server.resourcemanager.reservation.CapacitySchedulerPlanFollower
is used when using CapacityScheduler, and
org.apache.hadoop.yarn.server.resourcemanager.reservation.FairSchedulerPlanFollower
is used when using FairScheduler.
yarn.resourcemanager.reservation-system.plan.follower
Step size of the reservation system in ms
yarn.resourcemanager.reservation-system.planfollower.time-step1000
The expiry interval for a container
yarn.resourcemanager.rm.container-allocation.expiry-interval-ms600000The hostname of the NM.yarn.nodemanager.hostname0.0.0.0The address of the container manager in the NM.yarn.nodemanager.address${yarn.nodemanager.hostname}:0
The actual address the server will bind to. If this optional address is
set, the RPC and webapp servers will bind to this address and the port specified in
yarn.nodemanager.address and yarn.nodemanager.webapp.address, respectively. This is
most useful for making NM listen to all interfaces by setting to 0.0.0.0.
yarn.nodemanager.bind-hostEnvironment variables that should be forwarded from the NodeManager's environment to the container's.yarn.nodemanager.admin-envMALLOC_ARENA_MAX=$MALLOC_ARENA_MAXEnvironment variables that containers may override rather than use NodeManager's default.yarn.nodemanager.env-whitelistJAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZwho will execute(launch) the containers.yarn.nodemanager.container-executor.classorg.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutorComma separated List of container state transition listeners.yarn.nodemanager.container-state-transition-listener.classesNumber of threads container manager uses.yarn.nodemanager.container-manager.thread-count20Number of threads collector service uses.yarn.nodemanager.collector-service.thread-count5Number of threads used in cleanup.yarn.nodemanager.delete.thread-count4Max number of OPPORTUNISTIC containers to queue at the
nodemanager.yarn.nodemanager.opportunistic-containers-max-queue-length0
Number of seconds after an application finishes before the nodemanager's
DeletionService will delete the application's localized file directory
and log directory.
To diagnose YARN application problems, set this property's value large
enough (for example, to 600 = 10 minutes) to permit examination of these
directories. After changing the property's value, you must restart the
nodemanager in order for it to have an effect.
The roots of YARN applications' work directories are configurable with
the yarn.nodemanager.local-dirs property (see below), and the roots
of the YARN applications' log directories are configurable with the
yarn.nodemanager.log-dirs property (see also below).
yarn.nodemanager.delete.debug-delay-sec0Keytab for NM.yarn.nodemanager.keytab/etc/krb5.keytabList of directories to store localized files in. An
application's localized file directory will be found in:
${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
Individual containers' work directories, called container_${contid}, will
be subdirectories of this.
yarn.nodemanager.local-dirs${hadoop.tmp.dir}/nm-local-dirIt limits the maximum number of files which will be localized
in a single local directory. If the limit is reached then sub-directories
will be created and new files will be localized in them. If it is set to
a value less than or equal to 36 [which are sub-directories (0-9 and then
a-z)] then NodeManager will fail to start. For example, [for public
cache] if this is configured with a value of 40 (4 files +
36 sub-directories) and the local-dir is "/tmp/local-dir1" then it will
allow 4 files to be created directly inside "/tmp/local-dir1/filecache".
For files that are localized further it will create a sub-directory "0"
inside "/tmp/local-dir1/filecache" and will localize files inside it
until it becomes full. If a file is removed from a sub-directory that
is marked full, then that sub-directory will be used back again to
localize files.
yarn.nodemanager.local-cache.max-files-per-directory8192Address where the localizer IPC is.yarn.nodemanager.localizer.address${yarn.nodemanager.hostname}:8040Address where the collector service IPC is.yarn.nodemanager.collector-service.address${yarn.nodemanager.hostname}:8048Interval in between cache cleanups.yarn.nodemanager.localizer.cache.cleanup.interval-ms600000Target size of localizer cache in MB, per nodemanager. It is
a target retention size that only includes resources with PUBLIC and
PRIVATE visibility and excludes resources with APPLICATION visibility
yarn.nodemanager.localizer.cache.target-size-mb10240Number of threads to handle localization requests.yarn.nodemanager.localizer.client.thread-count5Number of threads to use for localization fetching.yarn.nodemanager.localizer.fetch.thread-count4yarn.nodemanager.container-localizer.java.opts-Xmx256m
The log level for container localizer while it is an independent process.
yarn.nodemanager.container-localizer.log.levelINFO
Where to store container logs. An application's localized log directory
will be found in ${yarn.nodemanager.log-dirs}/application_${appid}.
Individual containers' log directories will be below this, in directories
named container_${contid}. Each container directory will contain the files
stderr, stdin, and syslog generated by that container.
yarn.nodemanager.log-dirs${yarn.log.dir}/userlogs
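As an illustration of the directory layout just described, a node with two data disks might be configured as below; the /data1 and /data2 mount points are placeholders:
<!-- Hypothetical snippet: spread localized files and container logs across two disks. -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data1/yarn/local,/data2/yarn/local</value>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/data1/yarn/logs,/data2/yarn/logs</value>
</property>
A container's work directory would then appear under, e.g., /data1/yarn/local/usercache/${user}/appcache/application_${appid}/container_${contid}.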
The permissions settings used for the creation of container
directories when using DefaultContainerExecutor. This follows
standard user/group/all permissions format.
yarn.nodemanager.default-container-executor.log-dirs.permissions710Whether to enable log aggregation. Log aggregation collects
each container's logs and moves these logs onto a file-system, e.g.
HDFS, after the application completes. Users can configure the
"yarn.nodemanager.remote-app-log-dir" and
"yarn.nodemanager.remote-app-log-dir-suffix" properties to determine
where these logs are moved to. Users can access the logs via the
Application Timeline Server.
yarn.log-aggregation-enablefalseHow long to keep aggregation logs before deleting them. -1 disables.
Be careful: setting this too small will spam the name node.yarn.log-aggregation.retain-seconds-1How long to wait between aggregated log retention checks.
If set to 0 or a negative value then the value is computed as one-tenth
of the aggregated log retention time. Be careful: setting this too small
will spam the name node.yarn.log-aggregation.retain-check-interval-seconds-1Specify which log file controllers we will support. The first
file controller we add will be used to write the aggregated logs.
This comma separated configuration will work with the configuration:
yarn.log-aggregation.file-controller.%s.class which defines the supported
file controller's class. By default, the TFile controller would be used.
The user could override this configuration by adding more file controllers.
To support backward compatibility, make sure that we always
add TFile file controller.yarn.log-aggregation.file-formatsTFileClass that supports TFile read and write operations.yarn.log-aggregation.file-controller.TFile.classorg.apache.hadoop.yarn.logaggregation.filecontroller.tfile.LogAggregationTFileController
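A hedged sketch of the log-aggregation settings discussed above; the seven-day retention value is an illustrative choice, not a default from this file:
<!-- Hypothetical snippet: aggregate container logs to the remote file system
     and keep them for 7 days. -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/tmp/logs</value>
</property>
<property>
  <!-- 604800 seconds = 7 days; -1 (the default) disables deletion. -->
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>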
How long the ResourceManager waits for a NodeManager to report its
log aggregation status. If the time within which the log aggregation
status is reported by the NodeManager exceeds the configured value, the RM
will report the log aggregation status for this NodeManager as TIME_OUT.
yarn.log-aggregation-status.time-out.ms600000Time in seconds to retain user logs. Only applicable if
log aggregation is disabled
yarn.nodemanager.log.retain-seconds10800Where to aggregate logs to.yarn.nodemanager.remote-app-log-dir/tmp/logsThe remote log dir will be created at
{yarn.nodemanager.remote-app-log-dir}/${user}/{thisParam}
yarn.nodemanager.remote-app-log-dir-suffixlogsGenerate additional logs about container launches.
Currently, this creates a copy of the launch script and lists the
directory contents of the container work dir. When listing directory
contents, we follow symlinks to a max-depth of 5 (including symlinks
which point outside the container work dir), which may lead to
slowness in launching containers.
yarn.nodemanager.log-container-debug-info.enabledtrueAmount of physical memory, in MB, that can be allocated
for containers. If set to -1 and
yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
automatically calculated (in case of Windows and Linux).
In other cases, the default is 8192MB.
yarn.nodemanager.resource.memory-mb-1Amount of physical memory, in MB, that is reserved
for non-YARN processes. This configuration is only used if
yarn.nodemanager.resource.detect-hardware-capabilities is set
to true and yarn.nodemanager.resource.memory-mb is -1. If set
to -1, this amount is calculated as
20% of (system memory - 2*HADOOP_HEAPSIZE)
yarn.nodemanager.resource.system-reserved-memory-mb-1Whether physical memory limits will be enforced for
containers.yarn.nodemanager.pmem-check-enabledtrueWhether virtual memory limits will be enforced for
containers.yarn.nodemanager.vmem-check-enabledtrueRatio between virtual memory to physical memory when
setting memory limits for containers. Container allocations are
expressed in terms of physical memory, and virtual memory usage
is allowed to exceed this allocation by this ratio.
yarn.nodemanager.vmem-pmem-ratio2.1Number of vcores that can be allocated
for containers. This is used by the RM scheduler when allocating
resources for containers. This is not used to limit the number of
CPUs used by YARN containers. If it is set to -1 and
yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
automatically determined from the hardware in case of Windows and Linux.
In other cases, the number of vcores is 8 by default.yarn.nodemanager.resource.cpu-vcores-1Flag to determine if logical processors (such as
hyperthreads) should be counted as cores. Only applicable on Linux
when yarn.nodemanager.resource.cpu-vcores is set to -1 and
yarn.nodemanager.resource.detect-hardware-capabilities is true.
yarn.nodemanager.resource.count-logical-processors-as-coresfalseMultiplier to determine how to convert physical cores to
vcores. This value is used if yarn.nodemanager.resource.cpu-vcores
is set to -1 (which implies auto-calculate vcores) and
yarn.nodemanager.resource.detect-hardware-capabilities is set to true. The
number of vcores will be calculated as
number of CPUs * multiplier.
yarn.nodemanager.resource.pcores-vcores-multiplier1.0
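To show how these knobs interact, here is a sketch that lets the NM size itself from the hardware; detection only takes effect because memory-mb and cpu-vcores are left at -1, and the multiplier value is an illustrative assumption:
<!-- Hypothetical snippet: auto-detect node memory and vcores.
     Detection applies only while memory-mb and cpu-vcores stay at -1. -->
<property>
  <name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>-1</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>-1</value>
</property>
<property>
  <!-- Advertise 2 vcores per physical core; illustrative, the default is 1.0. -->
  <name>yarn.nodemanager.resource.pcores-vcores-multiplier</name>
  <value>2.0</value>
</property>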
Thread pool size for LogAggregationService in Node Manager.
yarn.nodemanager.logaggregation.threadpool-size-max100Percentage of CPU that can be allocated
for containers. This setting allows users to limit the amount of
CPU that YARN containers use. Currently functional only
on Linux using cgroups. The default is to use 100% of CPU.
yarn.nodemanager.resource.percentage-physical-cpu-limit100Enable auto-detection of node capabilities such as
memory and CPU.
yarn.nodemanager.resource.detect-hardware-capabilitiesfalseNM Webapp address.yarn.nodemanager.webapp.address${yarn.nodemanager.hostname}:8042
The https address of the NM web application.
yarn.nodemanager.webapp.https.address0.0.0.0:8044
The Kerberos keytab file to be used for spnego filter for the NM web
interface.
yarn.nodemanager.webapp.spnego-keytab-file
The Kerberos principal to be used for spnego filter for the NM web
interface.
yarn.nodemanager.webapp.spnego-principalHow often to monitor the node and the containers.
If 0 or negative, monitoring is disabled.yarn.nodemanager.resource-monitor.interval-ms3000Class that calculates current resource utilization.yarn.nodemanager.resource-calculator.classEnable container monitoryarn.nodemanager.container-monitor.enabledtrueHow often to monitor containers. If not set, the value for
yarn.nodemanager.resource-monitor.interval-ms will be used.
If 0 or negative, container monitoring is disabled.yarn.nodemanager.container-monitor.interval-msClass that calculates containers' current resource utilization.
If not set, the value for yarn.nodemanager.resource-calculator.class will
be used.yarn.nodemanager.container-monitor.resource-calculator.classFrequency of running node health script.yarn.nodemanager.health-checker.interval-ms600000Script time out period.yarn.nodemanager.health-checker.script.timeout-ms1200000The health check script to run.yarn.nodemanager.health-checker.script.pathThe arguments to pass to the health check script.yarn.nodemanager.health-checker.script.optsFrequency of running disk health checker code.yarn.nodemanager.disk-health-checker.interval-ms120000The minimum fraction of number of disks to be healthy for the
nodemanager to launch new containers. This corresponds to both
yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs, i.e. if there
are fewer healthy local-dirs (or log-dirs) available, then
new containers will not be launched on this node.yarn.nodemanager.disk-health-checker.min-healthy-disks0.25The maximum percentage of disk space utilization allowed after
which a disk is marked as bad. Values can range from 0.0 to 100.0.
If the value is greater than or equal to 100, the nodemanager will check
for full disk. This applies to yarn.nodemanager.local-dirs and
yarn.nodemanager.log-dirs.yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage90.0The low threshold percentage of disk space used when a bad disk is
marked as good. Values can range from 0.0 to 100.0. This applies to
yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs.
Note that if its value is more than yarn.nodemanager.disk-health-checker.
max-disk-utilization-per-disk-percentage or not set, it will be set to the same value as
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage.yarn.nodemanager.disk-health-checker.disk-utilization-watermark-low-per-disk-percentageThe minimum space that must be available on a disk for
it to be used. This applies to yarn.nodemanager.local-dirs and
yarn.nodemanager.log-dirs.yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb0The path to the Linux container executor.yarn.nodemanager.linux-container-executor.pathThe class which should help the LCE handle resources.yarn.nodemanager.linux-container-executor.resources-handler.classorg.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandlerThe cgroups hierarchy under which to place YARN processes (cannot contain commas).
If yarn.nodemanager.linux-container-executor.cgroups.mount is false
(that is, if cgroups have been pre-configured) and the YARN user has write
access to the parent directory, then the directory will be created.
If the directory already exists, the administrator has to give YARN
write permissions to it recursively.
This property only applies when the LCE resources handler is set to
CgroupsLCEResourcesHandler.yarn.nodemanager.linux-container-executor.cgroups.hierarchy/hadoop-yarnWhether the LCE should attempt to mount cgroups if not found.
This property only applies when the LCE resources handler is set to
CgroupsLCEResourcesHandler.
yarn.nodemanager.linux-container-executor.cgroups.mountfalseThis property sets the path from which YARN will read the
CGroups configuration. YARN has built-in functionality to discover the
system CGroup mount paths, so use this property only if YARN's automatic
mount path discovery does not work.
The path specified by this property must exist before the NodeManager is
launched.
If yarn.nodemanager.linux-container-executor.cgroups.mount is set to true,
YARN will first try to mount the CGroups at the specified path before
reading them.
If yarn.nodemanager.linux-container-executor.cgroups.mount is set to
false, YARN will read the CGroups at the specified path.
If this property is empty, YARN tries to detect the CGroups location.
Please refer to NodeManagerCgroups.html in the documentation for further
details.
This property only applies when the LCE resources handler is set to
CgroupsLCEResourcesHandler.
yarn.nodemanager.linux-container-executor.cgroups.mount-pathDelay in ms between attempts to remove linux cgroupyarn.nodemanager.linux-container-executor.cgroups.delete-delay-ms20This determines which of the two modes that LCE should use on
a non-secure cluster. If this value is set to true, then all containers
will be launched as the user specified in
yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user. If
this value is set to false, then containers will run as the user who
submitted the application.yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-userstrueThe UNIX user that containers will run as when
Linux-container-executor is used in nonsecure mode (a use case for this
is using cgroups) if the
yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users is
set to true.yarn.nodemanager.linux-container-executor.nonsecure-mode.local-usernobodyThe allowed pattern for UNIX user names enforced by
Linux-container-executor when used in nonsecure mode (use case for this
is using cgroups). The default value is taken from /usr/sbin/adduseryarn.nodemanager.linux-container-executor.nonsecure-mode.user-pattern^[_.A-Za-z0-9][-@_.A-Za-z0-9]{0,255}?[$]?$This flag determines whether apps should run with strict resource limits
or be allowed to consume spare resources if they need them. For example, turning the
flag on will restrict apps to use only their share of CPU, even if the node has spare
CPU cycles. The default value is false i.e. use available resources. Please note that
turning this flag on may reduce job throughput on the cluster.yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usagefalseComma separated list of runtimes that are allowed when using
LinuxContainerExecutor. The allowed values are default, docker, and
javasandbox.yarn.nodemanager.runtime.linux.allowed-runtimesdefaultThis configuration setting determines the capabilities
assigned to docker containers when they are launched. While these may not
be case-sensitive from a docker perspective, it is best to keep these
uppercase. To run without any capabilities, set this value to
"none" or "NONE"yarn.nodemanager.runtime.linux.docker.capabilitiesCHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID,SETUID,SETFCAP,SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITEThis configuration setting determines if
privileged docker containers are allowed on this cluster.
Use with extreme care.yarn.nodemanager.runtime.linux.docker.privileged-containers.allowedfalseThis configuration setting determines who is allowed to run
privileged docker containers on this cluster. Use with extreme care.
yarn.nodemanager.runtime.linux.docker.privileged-containers.aclThe set of networks allowed when launching containers using the
DockerContainerRuntime.yarn.nodemanager.runtime.linux.docker.allowed-container-networkshost,none,bridgeThe network used when launching containers using the
DockerContainerRuntime when no network is specified in the request.
This network must be one of the (configurable) set of allowed container
networks.yarn.nodemanager.runtime.linux.docker.default-container-networkhostProperty to enable docker user remappingyarn.nodemanager.runtime.linux.docker.enable-userremapping.allowedtruelower limit for acceptable uids of user remapped useryarn.nodemanager.runtime.linux.docker.userremapping-uid-threshold1lower limit for acceptable gids of user remapped useryarn.nodemanager.runtime.linux.docker.userremapping-gid-threshold1The mode in which the Java Container Sandbox should run detailed by
the JavaSandboxLinuxContainerRuntime.yarn.nodemanager.runtime.linux.sandbox-modedisabledPermissions for application local directories.yarn.nodemanager.runtime.linux.sandbox-mode.local-dirs.permissionsreadLocation for non-default java policy file.yarn.nodemanager.runtime.linux.sandbox-mode.policyThe group which will run by default without the java security
manager.yarn.nodemanager.runtime.linux.sandbox-mode.whitelist-groupThis flag determines whether memory limit will be set for the Windows Job
Object of the containers launched by the default container executor.yarn.nodemanager.windows-container.memory-limit.enabledfalseThis flag determines whether CPU limit will be set for the Windows Job
Object of the containers launched by the default container executor.yarn.nodemanager.windows-container.cpu-limit.enabledfalse
Interval of time the linux container executor should try cleaning up
cgroups entry when cleaning up a container.
yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms1000
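A sketch of the LCE/cgroups wiring covered above, assuming pre-mounted cgroups and a dedicated "hadoop" group; the group name is a placeholder:
<!-- Hypothetical snippet: use the LinuxContainerExecutor with pre-mounted cgroups. -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
<property>
  <!-- cgroups are assumed to be pre-configured, so no mounting is attempted. -->
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
  <value>/hadoop-yarn</value>
</property>
<property>
  <!-- Placeholder group name. -->
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>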
The UNIX group that the linux-container-executor should run as.
yarn.nodemanager.linux-container-executor.groupT-file compression types used to compress aggregated logs.yarn.nodemanager.log-aggregation.compression-typenoneThe kerberos principal for the node manager.yarn.nodemanager.principalA comma separated list of services where service name should only
contain a-zA-Z0-9_ and can not start with numbersyarn.nodemanager.aux-servicesNo. of ms to wait between sending a SIGTERM and SIGKILL to a containeryarn.nodemanager.sleep-delay-before-sigkill.ms250Max time to wait for a process to come up when trying to cleanup a containeryarn.nodemanager.process-kill-wait.ms5000The minimum allowed version of a resourcemanager that a nodemanager will connect to.
The valid values are NONE (no version checking), EqualToNM (the resourcemanager's version is
equal to or greater than the NM version), or a Version String.yarn.nodemanager.resourcemanager.minimum.versionNONEMaximum size of a container's diagnostics to keep for the relaunching
container case.yarn.nodemanager.container-diagnostics-maximum-size10000Minimum container restart interval in milliseconds.yarn.nodemanager.container-retry-minimum-interval-ms1000Max number of threads in NMClientAsync to process container
management eventsyarn.client.nodemanager-client-async.thread-pool-max-size500Max time to wait to establish a connection to NMyarn.client.nodemanager-connect.max-wait-ms180000Time interval between each attempt to connect to NMyarn.client.nodemanager-connect.retry-interval-ms10000
Max time to wait for NM to connect to RM.
When not set, proxy will fall back to use value of
yarn.resourcemanager.connect.max-wait.ms.
yarn.nodemanager.resourcemanager.connect.max-wait.ms
Time interval between each NM attempt to connect to RM.
When not set, proxy will fall back to use value of
yarn.resourcemanager.connect.retry-interval.ms.
yarn.nodemanager.resourcemanager.connect.retry-interval.ms
Maximum number of proxy connections to cache for node managers. If set
to a value greater than zero then the cache is enabled and the NMClient
and MRAppMaster will cache the specified number of node manager proxies.
There will be at most one proxy per node manager. For example, configuring it to a
value of 5 will make sure that the client will have at most 5 proxies cached
with 5 different node managers. The connections for these proxies will
be timed out if idle for more than the system wide idle timeout period.
Note that this could cause issues on large clusters as many connections
could linger simultaneously and lead to a large number of connection
threads. The token used for authentication will be used only at
connection creation time. If a new token is received then the earlier
connection should be closed in order to use the new token. This and
(yarn.client.nodemanager-client-async.thread-pool-max-size) are related
and should be in sync (no need for them to be equal).
If the value of this property is zero then the connection cache is
disabled and connections will use a zero idle timeout to prevent too
many connection threads on large clusters.
yarn.client.max-cached-nodemanagers-proxies0Enable the node manager to recover after startingyarn.nodemanager.recovery.enabledfalseThe local filesystem directory in which the node manager will
store state when recovery is enabled.yarn.nodemanager.recovery.dir${hadoop.tmp.dir}/yarn-nm-recoveryThe time in seconds between full compactions of the NM state
database. Setting the interval to zero disables the full compaction
cycles.yarn.nodemanager.recovery.compaction-interval-secs3600Whether the nodemanager is running under supervision. A
nodemanager that supports recovery and is running under supervision
will not try to clean up containers as it exits, with the assumption that
it will be immediately restarted and will recover containers.yarn.nodemanager.recovery.supervisedfalse
Adjustment to the container OS scheduling priority. In Linux, passed
directly to the nice command.
yarn.nodemanager.container-executor.os.sched.priority.adjustment0
Flag to enable container metrics
yarn.nodemanager.container-metrics.enabletrue
Container metrics flush period in ms. Set to -1 for flush on completion.
yarn.nodemanager.container-metrics.period-ms-1
The delay time ms to unregister container metrics after completion.
yarn.nodemanager.container-metrics.unregister-delay-ms10000
Class used to calculate current container resource utilization.
yarn.nodemanager.container-monitor.process-tree.class
Flag to enable NodeManager disk health checker
yarn.nodemanager.disk-health-checker.enabletrue
Number of threads to use in NM log cleanup. Used when log aggregation
is disabled.
yarn.nodemanager.log.deletion-threads-count4
The Windows group that the windows-container-executor should run as.
yarn.nodemanager.windows-secure-container-executor.groupyarn.nodemanager.aux-services.mapreduce_shuffle.classorg.apache.hadoop.mapred.ShuffleHandlerThe kerberos principal for the proxy, if the proxy is not
running as part of the RM.yarn.web-proxy.principalKeytab for WebAppProxy, if the proxy is not running as part of
the RM.yarn.web-proxy.keytabThe address for the web proxy as HOST:PORT, if this is not
given then the proxy will run as part of the RMyarn.web-proxy.address
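To illustrate the auxiliary-services mechanism mentioned earlier, a typical MapReduce shuffle registration on the NM looks like the following sketch:
<!-- Hypothetical snippet: register the MapReduce shuffle auxiliary service on the NM. -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>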
CLASSPATH for YARN applications. A comma-separated list
of CLASSPATH entries. When this value is empty, the following default
CLASSPATH for YARN applications would be used.
For Linux:
$HADOOP_CONF_DIR,
$HADOOP_COMMON_HOME/share/hadoop/common/*,
$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
For Windows:
%HADOOP_CONF_DIR%,
%HADOOP_COMMON_HOME%/share/hadoop/common/*,
%HADOOP_COMMON_HOME%/share/hadoop/common/lib/*,
%HADOOP_HDFS_HOME%/share/hadoop/hdfs/*,
%HADOOP_HDFS_HOME%/share/hadoop/hdfs/lib/*,
%HADOOP_YARN_HOME%/share/hadoop/yarn/*,
%HADOOP_YARN_HOME%/share/hadoop/yarn/lib/*
yarn.application.classpathIndicates the current version of the running
timeline service. For example, if "yarn.timeline-service.version" is 1.5,
and "yarn.timeline-service.enabled" is true, it means the cluster will and
should bring up the timeline service v.1.5 (and nothing else).
On the client side, if the client uses the same version of timeline service,
it should succeed. If the client chooses to use a smaller version in spite of this,
then depending on how robust the compatibility story is between versions,
the results may vary.
yarn.timeline-service.version1.0f
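A sketch of how the version and enablement flags combine, here asking the cluster to run timeline service v1.5; the values are illustrative:
<!-- Hypothetical snippet: enable the timeline service and pin it to v1.5. -->
<property>
  <name>yarn.timeline-service.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.timeline-service.version</name>
  <value>1.5</value>
</property>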
On the server side this indicates whether the timeline service is enabled or not.
On the client side, users can enable it to indicate whether the client wants
to use the timeline service. If it's enabled on the client side along with
security, then yarn client tries to fetch the delegation tokens for the
timeline server.
yarn.timeline-service.enabledfalseThe hostname of the timeline service web application.yarn.timeline-service.hostname0.0.0.0This is the default address for the timeline server to start the
RPC server.yarn.timeline-service.address${yarn.timeline-service.hostname}:10200The http address of the timeline service web application.yarn.timeline-service.webapp.address${yarn.timeline-service.hostname}:8188The https address of the timeline service web application.yarn.timeline-service.webapp.https.address${yarn.timeline-service.hostname}:8190
The actual address the server will bind to. If this optional address is
set, the RPC and webapp servers will bind to this address and the port specified in
yarn.timeline-service.address and yarn.timeline-service.webapp.address, respectively.
This is most useful for making the service listen to all interfaces by setting to
0.0.0.0.
yarn.timeline-service.bind-host
Defines the max number of applications that could be fetched using the REST API or
the application history protocol and shown in the timeline server web ui.
yarn.timeline-service.generic-application-history.max-applications10000Store class name for timeline store.yarn.timeline-service.store-classorg.apache.hadoop.yarn.server.timeline.LeveldbTimelineStoreEnable age off of timeline store data.yarn.timeline-service.ttl-enabletrueTime to live for timeline store data in milliseconds.yarn.timeline-service.ttl-ms604800000Store file name for leveldb timeline store.yarn.timeline-service.leveldb-timeline-store.path${hadoop.tmp.dir}/yarn/timelineLength of time to wait between deletion cycles of leveldb timeline store in milliseconds.yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms300000Size of read cache for uncompressed blocks for leveldb timeline store in bytes.yarn.timeline-service.leveldb-timeline-store.read-cache-size104857600Size of cache for recently read entity start times for leveldb timeline store in number of entities.yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size10000Size of cache for recently written entity start times for leveldb timeline store in number of entities.yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size10000Handler thread count to serve the client RPC requests.yarn.timeline-service.handler-thread-count10yarn.timeline-service.http-authentication.typesimple
Defines authentication used for the timeline server HTTP endpoint.
Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#
yarn.timeline-service.http-authentication.simple.anonymous.allowedtrue
Indicates if anonymous requests are allowed by the timeline server when using
'simple' authentication.
The Kerberos principal for the timeline server.yarn.timeline-service.principalThe Kerberos keytab for the timeline server.yarn.timeline-service.keytab/etc/krb5.keytabComma separated list of UIs that will be hostedyarn.timeline-service.ui-names
Default maximum number of retries for timeline service client
and value -1 means no limit.
yarn.timeline-service.client.max-retries30Client policy for whether timeline operations are non-fatal.
Should the failure to obtain a delegation token be considered an application
failure (option = false), or should the client attempt to continue to
publish information without it (option=true)yarn.timeline-service.client.best-effortfalse
Default retry time interval for timeline service client.
yarn.timeline-service.client.retry-interval-ms1000
The time period for which timeline v2 client will wait for draining
leftover entities after stop.
yarn.timeline-service.client.drain-entities.timeout.ms2000Enable timeline server to recover state after starting. If
true, then yarn.timeline-service.state-store-class must be specified.
yarn.timeline-service.recovery.enabledfalseStore class name for timeline state store.yarn.timeline-service.state-store-classorg.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStoreStore file name for leveldb state store.yarn.timeline-service.leveldb-state-store.path${hadoop.tmp.dir}/yarn/timelineyarn.timeline-service.entity-group-fs-store.cache-store-classorg.apache.hadoop.yarn.server.timeline.MemoryTimelineStoreCaching storage timeline server v1.5 is using. yarn.timeline-service.entity-group-fs-store.active-dir/tmp/entity-file-history/activeHDFS path to store active application’s timeline datayarn.timeline-service.entity-group-fs-store.done-dir/tmp/entity-file-history/done/HDFS path to store done application’s timeline datayarn.timeline-service.entity-group-fs-store.group-id-plugin-classes
Plugins that can translate a timeline entity read request into
a list of timeline entity group ids, separated by commas.
yarn.timeline-service.entity-group-fs-store.group-id-plugin-classpath
Classpath for all plugins defined in
yarn.timeline-service.entity-group-fs-store.group-id-plugin-classes.
yarn.timeline-service.entity-group-fs-store.summary-storeSummary storage for ATS v1.5org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStoreyarn.timeline-service.entity-group-fs-store.scan-interval-seconds
Scan interval for ATS v1.5 entity group file system storage reader. This
value controls how frequently the reader will scan the HDFS active directory
for application status.
60yarn.timeline-service.entity-group-fs-store.cleaner-interval-seconds
Scan interval for ATS v1.5 entity group file system storage cleaner. This
value controls how frequently the reader will scan the HDFS done directory
for stale application data.
3600yarn.timeline-service.entity-group-fs-store.retain-seconds
How long the ATS v1.5 entity group file system storage will keep an
application's data in the done directory.
604800yarn.timeline-service.entity-group-fs-store.leveldb-cache-read-cache-size
Read cache size for the leveldb cache storage in ATS v1.5 plugin storage.
10485760yarn.timeline-service.entity-group-fs-store.app-cache-size
Size of the reader cache for ATS v1.5 reader. This value controls how many
entity groups the ATS v1.5 server should cache. If the number of active
read entity groups is greater than the number of cached items, some reads
may return empty data. This value must be greater than 0.
10yarn.timeline-service.client.fd-flush-interval-secs
Flush interval for ATS v1.5 writer. This value controls how frequently
the writer will flush the HDFS FSStream for the entity/domain.
10yarn.timeline-service.client.fd-clean-interval-secs
Scan interval for ATS v1.5 writer. This value controls how frequently
the writer will scan the HDFS FSStream for the entity/domain.
If the FSStream is stale for a long time, it will be closed.
60yarn.timeline-service.client.fd-retain-secs
How long the ATS v1.5 writer will keep an FSStream open.
If this FSStream does not write anything for this configured time,
it will be closed.
300yarn.timeline-service.writer.class
Storage implementation ATS v2 will use for the TimelineWriter service.
org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImplyarn.timeline-service.reader.class
Storage implementation ATS v2 will use for the TimelineReader service.
org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImplyarn.timeline-service.client.internal-timers-ttl-secs
How long the internal Timer Tasks can be alive in the writer. If there is no
write operation for this configured time, the internal timer tasks will
be closed.
420The setting that controls how often the timeline collector
flushes the timeline writer.yarn.timeline-service.writer.flush-interval-seconds60Time period for which the application collector will remain alive
in the NM after the application master container finishes.yarn.timeline-service.app-collector.linger-period.ms1000The timeline V2 client tries to merge this many
asynchronous entities (if available) and then calls the REST ATS V2 API to submit them.
yarn.timeline-service.timeline-client.number-of-async-entities-to-merge10
The setting that controls how long the final value
of a metric of a completed app is retained before merging into
the flow sum. Up to this time after an application is completed,
out-of-order values that arrive can be recognized and discarded at the
cost of increased storage.
yarn.timeline-service.hbase.coprocessor.app-final-value-retention-milliseconds
259200000
The default hdfs location for flowrun coprocessor jar.
yarn.timeline-service.hbase.coprocessor.jar.hdfs.location
/hbase/coprocessor/hadoop-yarn-server-timelineservice.jar
The value of this parameter sets the prefix for all tables that are part of
timeline service in the hbase storage schema. It can be set to "dev."
or "staging." if it is to be used for development or staging instances.
This way the data in production tables stays in a separate set of tables
prefixed by "prod.".
yarn.timeline-service.hbase-schema.prefixprod. Optional URL to an hbase-site.xml configuration file to be
used to connect to the timeline-service hbase cluster. If empty or not
specified, then the HBase configuration will be loaded from the classpath.
When specified, the values in that configuration file will override
those present on the classpath.
yarn.timeline-service.hbase.configuration.file
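To illustrate how these two HBase-related settings combine, the sketch below uses the "dev." table prefix mentioned above and points the timeline service at an external hbase-site.xml; the file URL is an assumption for the example.
  <configuration>
    <!-- Keep development ATS v2 tables separate from production ones -->
    <property>
      <name>yarn.timeline-service.hbase-schema.prefix</name>
      <value>dev.</value>
    </property>
    <!-- Load HBase connection settings from an explicit file instead of the classpath (illustrative URL) -->
    <property>
      <name>yarn.timeline-service.hbase.configuration.file</name>
      <value>file:///etc/hbase/conf/hbase-site.xml</value>
    </property>
  </configuration>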
Whether the shared cache is enabledyarn.sharedcache.enabledfalseThe root directory for the shared cacheyarn.sharedcache.root-dir/sharedcacheThe level of nested directories before getting to the checksum
directories. It must be non-negative.yarn.sharedcache.nested-level3The implementation to be used for the SCM storeyarn.sharedcache.store.classorg.apache.hadoop.yarn.server.sharedcachemanager.store.InMemorySCMStoreThe implementation to be used for the SCM app-checkeryarn.sharedcache.app-checker.classorg.apache.hadoop.yarn.server.sharedcachemanager.RemoteAppCheckerA resource in the in-memory store is considered stale
if the time since the last reference exceeds the staleness period.
This value is specified in minutes.yarn.sharedcache.store.in-memory.staleness-period-mins10080Initial delay before the in-memory store runs its first check
to remove dead initial applications. Specified in minutes.yarn.sharedcache.store.in-memory.initial-delay-mins10The frequency at which the in-memory store checks to remove
dead initial applications. Specified in minutes.yarn.sharedcache.store.in-memory.check-period-mins720The address of the admin interface in the SCM (shared cache manager)yarn.sharedcache.admin.address0.0.0.0:8047The number of threads used to handle SCM admin interface (1 by default)yarn.sharedcache.admin.thread-count1The address of the web application in the SCM (shared cache manager)yarn.sharedcache.webapp.address0.0.0.0:8788The frequency at which a cleaner task runs.
Specified in minutes.yarn.sharedcache.cleaner.period-mins1440Initial delay before the first cleaner task is scheduled.
Specified in minutes.yarn.sharedcache.cleaner.initial-delay-mins10The time to sleep between processing each shared cache
resource. Specified in milliseconds.yarn.sharedcache.cleaner.resource-sleep-ms0The address of the node manager interface in the SCM
(shared cache manager)yarn.sharedcache.uploader.server.address0.0.0.0:8046The number of threads used to handle shared cache manager
requests from the node manager (50 by default)yarn.sharedcache.uploader.server.thread-count50The address of the client interface in the SCM
(shared cache manager)yarn.sharedcache.client-server.address0.0.0.0:8045The number of threads used to handle shared cache manager
requests from clients (50 by default)yarn.sharedcache.client-server.thread-count50The algorithm used to compute checksums of files (SHA-256 by
default)yarn.sharedcache.checksum.algo.implorg.apache.hadoop.yarn.sharedcache.ChecksumSHA256ImplThe replication factor for the node manager uploader for the
shared cache (10 by default)yarn.sharedcache.nm.uploader.replication.factor10The number of threads used to upload files from a node manager
instance (20 by default)yarn.sharedcache.nm.uploader.thread-count20
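A minimal sketch of turning the shared cache on while keeping the other defaults listed above; the HDFS root directory shown is illustrative.
  <configuration>
    <property>
      <name>yarn.sharedcache.enabled</name>
      <value>true</value>
    </property>
    <!-- Root directory in HDFS for cached resources (illustrative path) -->
    <property>
      <name>yarn.sharedcache.root-dir</name>
      <value>/yarn/sharedcache</value>
    </property>
  </configuration>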
ACL protocol for use in the Timeline server.
security.applicationhistory.protocol.acl
Set to true for MiniYARNCluster unit tests
yarn.is.miniclusterfalse
Set for MiniYARNCluster unit tests to control resource monitoring
yarn.minicluster.control-resource-monitoringfalse
Set to false in order to allow MiniYARNCluster to run tests without
port conflicts.
yarn.minicluster.fixed.portsfalse
Set to false in order to allow the NodeManager in MiniYARNCluster to
use RPC to talk to the RM.
yarn.minicluster.use-rpcfalse
The same as the yarn.nodemanager.resource.memory-mb property, but for the NodeManager
in a MiniYARNCluster.
yarn.minicluster.yarn.nodemanager.resource.memory-mb4096
Enable node labels feature
yarn.node-labels.enabledfalse
URI for NodeLabelManager. The default value is
/tmp/hadoop-yarn-${user}/node-labels/ in the local filesystem.
yarn.node-labels.fs-store.root-dir
Set configuration type for node labels. Administrators can specify
"centralized", "delegated-centralized" or "distributed".
yarn.node-labels.configuration-typecentralized
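For instance, a centralized node label setup might enable the feature and give the label store an explicit HDFS directory; this is a sketch, and the hdfs:// path is an assumption.
  <configuration>
    <property>
      <name>yarn.node-labels.enabled</name>
      <value>true</value>
    </property>
    <!-- Store labels in HDFS so they survive RM restarts (illustrative path) -->
    <property>
      <name>yarn.node-labels.fs-store.root-dir</name>
      <value>hdfs:///yarn/node-labels</value>
    </property>
    <property>
      <name>yarn.node-labels.configuration-type</name>
      <value>centralized</value>
    </property>
  </configuration>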
When "yarn.node-labels.configuration-type" is configured with "distributed"
in RM, Administrators can configure in NM the provider for the
node labels by configuring this parameter. Administrators can
configure "config", "script" or the class name of the provider. Configured
class needs to extend
org.apache.hadoop.yarn.server.nodemanager.nodelabels.NodeLabelsProvider.
If "config" is configured, then "ConfigurationNodeLabelsProvider" and if
"script" is configured, then "ScriptNodeLabelsProvider" will be used.
yarn.nodemanager.node-labels.provider
When "yarn.nodemanager.node-labels.provider" is configured with "config",
"Script" or the configured class extends AbstractNodeLabelsProvider, then
periodically node labels are retrieved from the node labels provider. This
configuration is to define the interval period.
If -1 is configured then node labels are retrieved from provider only
during initialization. Defaults to 10 mins.
yarn.nodemanager.node-labels.provider.fetch-interval-ms600000
Interval at which the NM syncs its node labels with the RM. The NM will send
its loaded labels along with the heartbeat to the RM at this configured interval.
yarn.nodemanager.node-labels.resync-interval-ms120000
When "yarn.nodemanager.node-labels.provider" is configured with "config"
then ConfigurationNodeLabelsProvider fetches the partition label from this
parameter.
yarn.nodemanager.node-labels.provider.configured-node-partition
When "yarn.nodemanager.node-labels.provider" is configured with "Script"
then this configuration provides the timeout period after which it will
interrupt the script which queries the Node labels. Defaults to 20 mins.
yarn.nodemanager.node-labels.provider.fetch-timeout-ms1200000
When "yarn.node-labels.configuration-type" is of type "delegated-centralized",
administrators should configure the class used by the ResourceManager for
fetching node labels. The configured class needs to extend
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsMappingProvider.
yarn.resourcemanager.node-labels.provider
When "yarn.node-labels.configuration-type" is configured with
"delegated-centralized", then periodically node labels are retrieved
from the node labels provider. This configuration is to define the
interval. If -1 is configured then node labels are retrieved from
provider only once for each node after it registers. Defaults to 30 mins.
yarn.resourcemanager.node-labels.provider.fetch-interval-ms1800000
Timeout in seconds for YARN node graceful decommission.
This is the maximum time to wait for running containers and applications to complete
before transitioning a DECOMMISSIONING node into DECOMMISSIONED.
yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs3600
Timeout in seconds of DecommissioningNodesWatcher internal polling.
yarn.resourcemanager.decommissioning-nodes-watcher.poll-interval-secs20The node label script to run. A script output line starting with
"NODE_PARTITION:" will be considered the node label partition. If
multiple lines have this pattern, the last one will be considered.
yarn.nodemanager.node-labels.provider.script.pathThe arguments to pass to the node label script.yarn.nodemanager.node-labels.provider.script.opts
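Putting the script-related settings together, the NM side of a distributed setup might look like the sketch below; the script path, its option string, and the "gpu" partition it would print are placeholders. On the RM side, "yarn.node-labels.configuration-type" would also need to be set to "distributed" for this NM-side provider to be consulted.
  <configuration>
    <!-- Let each NM report its own partition via a script -->
    <property>
      <name>yarn.nodemanager.node-labels.provider</name>
      <value>script</value>
    </property>
    <!-- Script that prints a line such as NODE_PARTITION:gpu (illustrative path) -->
    <property>
      <name>yarn.nodemanager.node-labels.provider.script.path</name>
      <value>/opt/yarn/bin/node-partition.sh</value>
    </property>
    <!-- Arguments passed to the script (illustrative) -->
    <property>
      <name>yarn.nodemanager.node-labels.provider.script.opts</name>
      <value>--check-gpu</value>
    </property>
  </configuration>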
Flag to indicate whether the RM is participating in Federation or not.
yarn.federation.enabledfalse
Machine list file to be loaded by the FederationSubCluster Resolver
yarn.federation.machine-list
Class name for SubClusterResolver
yarn.federation.subcluster-resolver.classorg.apache.hadoop.yarn.server.federation.resolver.DefaultSubClusterResolverImpl
Store class name for federation state store
yarn.federation.state-store.classorg.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore
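A minimal sketch of turning federation on in the RM; the state store class shown is simply the in-memory default listed above, and a production setup would likely substitute a persistent store implementation.
  <configuration>
    <property>
      <name>yarn.federation.enabled</name>
      <value>true</value>
    </property>
    <!-- In-memory state store, as in the default above; swap in a persistent store for real clusters -->
    <property>
      <name>yarn.federation.state-store.class</name>
      <value>org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore</value>
    </property>
  </configuration>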
The time in seconds after which the federation state store local cache
will be refreshed periodically
yarn.federation.cache-ttl.secs300The interval that the yarn client library uses to poll the
completion status of the asynchronous API of application client protocol.
yarn.client.application-client-protocol.poll-interval-ms200
The duration (in ms) the YARN client waits for an expected state change
to occur. -1 means unlimited wait time.
yarn.client.application-client-protocol.poll-timeout-ms-1RSS usage of a process computed via
/proc/pid/stat is not very accurate as it includes shared pages of a
process. /proc/pid/smaps provides useful information like
Private_Dirty, Private_Clean, Shared_Dirty, Shared_Clean which can be used
for computing more accurate RSS. When this flag is enabled, RSS is computed
as Min(Shared_Dirty, Pss) + Private_Clean + Private_Dirty. It excludes
read-only shared mappings in RSS computation.
yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabledfalse
URL for log aggregation server
yarn.log.server.url
URL for log aggregation server web service
yarn.log.server.web-service.url
RM Application Tracking URL
yarn.tracking.url.generator
Class to be used for YarnAuthorizationProvider
yarn.authorization-providerDefines how often NMs wake up to upload log files.
The default value is -1. By default, the logs will be uploaded when
the application is finished. By setting this configuration, logs can be uploaded
periodically while the application is running. The minimum value that
rolling-interval-seconds can be set to is 3600.
yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds-1
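For example, to upload logs roughly every hour while an application is still running, the interval could be set to its documented minimum of 3600 seconds; this is a sketch, not a recommendation.
  <configuration>
    <!-- Roll and upload aggregated logs hourly instead of only at application completion -->
    <property>
      <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
      <value>3600</value>
    </property>
  </configuration>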
Enable/disable intermediate-data encryption at YARN level. For now,
this is only used by the FileSystemRMStateStore to set up the right
file-system security attributes.
yarn.intermediate-data-encryption.enablefalseFlag to enable cross-origin (CORS) support in the NM. This flag
requires the CORS filter initializer to be added to the filter initializers
list in core-site.xml.yarn.nodemanager.webapp.cross-origin.enabledfalse
Defines maximum application priority in a cluster.
If an application is submitted with a priority higher than this value, it will be
reset to this maximum value.
yarn.cluster.max-application-priority0
The default log aggregation policy class. Applications can
override it via LogAggregationContext. This configuration can provide
some cluster-side default behavior so that if the application doesn't
specify any policy via LogAggregationContext, administrators of the cluster
can adjust the policy globally.
yarn.nodemanager.log-aggregation.policy.classorg.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AllContainerLogAggregationPolicy
The default parameters for the log aggregation policy. Applications can
override it via LogAggregationContext. This configuration can provide
some cluster-side default behavior so that if the application doesn't
specify any policy via LogAggregationContext, administrators of the cluster
can adjust the policy globally.
yarn.nodemanager.log-aggregation.policy.parameters
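As an illustration of the policy class above, a cluster could default to aggregating only the logs of containers that failed or were killed; the class name shown is assumed to live in the same package as the default and should be verified against the Hadoop version in use.
  <configuration>
    <!-- Cluster-wide default: aggregate logs only for failed or killed containers
         (class name assumed; verify it exists in your Hadoop version) -->
    <property>
      <name>yarn.nodemanager.log-aggregation.policy.class</name>
      <value>org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.FailedOrKilledContainerLogAggregationPolicy</value>
    </property>
  </configuration>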
Enable/Disable AMRMProxyService in the node manager. This service is used to
intercept calls from the application masters to the resource manager.
yarn.nodemanager.amrmproxy.enabledfalse
The address of the AMRMProxyService listener.
yarn.nodemanager.amrmproxy.address0.0.0.0:8049
The number of threads used to handle requests by the AMRMProxyService.
yarn.nodemanager.amrmproxy.client.thread-count25
The comma separated list of class names that implement the
RequestInterceptor interface. This is used by the AMRMProxyService to create
the request processing pipeline for applications.
yarn.nodemanager.amrmproxy.interceptor-class.pipelineorg.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor
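A sketch of enabling the AMRMProxy while keeping the default interceptor pipeline listed above; additional interceptors, if any, would be added to the comma-separated list.
  <configuration>
    <property>
      <name>yarn.nodemanager.amrmproxy.enabled</name>
      <value>true</value>
    </property>
    <!-- Default interceptor pipeline restated for clarity -->
    <property>
      <name>yarn.nodemanager.amrmproxy.interceptor-class.pipeline</name>
      <value>org.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor</value>
    </property>
  </configuration>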
Setting that controls whether distributed scheduling is enabled.
yarn.nodemanager.distributed-scheduling.enabledfalse
Setting that controls whether opportunistic container allocation
is enabled.
yarn.resourcemanager.opportunistic-container-allocation.enabledfalse
Number of nodes to be used by the Opportunistic Container Allocator for
dispatching containers during container allocation.
yarn.resourcemanager.opportunistic-container-allocation.nodes-used10
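To illustrate how these flags interact, the sketch below enables opportunistic container allocation on the RM; yarn.nodemanager.distributed-scheduling.enabled would only be needed if distributed scheduling were desired as well.
  <configuration>
    <!-- RM-side switch for opportunistic container allocation -->
    <property>
      <name>yarn.resourcemanager.opportunistic-container-allocation.enabled</name>
      <value>true</value>
    </property>
    <!-- Number of least-loaded nodes considered when dispatching opportunistic containers -->
    <property>
      <name>yarn.resourcemanager.opportunistic-container-allocation.nodes-used</name>
      <value>10</value>
    </property>
  </configuration>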
Frequency for computing least loaded NMs.
yarn.resourcemanager.nm-container-queuing.sorting-nodes-interval-ms1000
Comparator for determining node load for Distributed Scheduling.
yarn.resourcemanager.nm-container-queuing.load-comparatorQUEUE_LENGTH
Value of standard deviation used for calculation of queue limit thresholds.
yarn.resourcemanager.nm-container-queuing.queue-limit-stdev1.0f
Min length of container queue at NodeManager.
yarn.resourcemanager.nm-container-queuing.min-queue-length5
Max length of container queue at NodeManager.
yarn.resourcemanager.nm-container-queuing.max-queue-length15
Min queue wait time for a container at a NodeManager.
yarn.resourcemanager.nm-container-queuing.min-queue-wait-time-ms10
Max queue wait time for a container queue at a NodeManager.
yarn.resourcemanager.nm-container-queuing.max-queue-wait-time-ms100
Use container pause as the preemption policy over kill in the container
queue at a NodeManager.
yarn.nodemanager.opportunistic-containers-use-pause-for-preemptionfalse
Error filename pattern, to identify the file in the container's
log directory which contains the container's error log. Because error file
redirection is done by the client/AM, YARN will not be aware of the error
file name. YARN uses this pattern to identify the error file and tail
the error log as diagnostics when the container execution returns a non-zero
value. Filename patterns are case-sensitive and should match the
specifications of the FileSystem.globStatus(Path) API. If multiple filenames
match the pattern, the first matching file will be picked.
yarn.nodemanager.container.stderr.pattern{*stderr*,*STDERR*}
Size of the container error file which needs to be tailed, in bytes.
yarn.nodemanager.container.stderr.tail.bytes 4096
Choose a different implementation of node label storage.
yarn.node-labels.fs-store.impl.classorg.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore
Enable the CSRF filter for the RM web app
yarn.resourcemanager.webapp.rest-csrf.enabledfalse
Optional parameter that indicates the custom header name to use for CSRF
protection.
yarn.resourcemanager.webapp.rest-csrf.custom-headerX-XSRF-Header
Optional parameter that indicates the list of HTTP methods that do not
require CSRF protection
yarn.resourcemanager.webapp.rest-csrf.methods-to-ignoreGET,OPTIONS,HEAD
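A sketch of turning on the RM CSRF filter while keeping the default header and ignored methods shown above; clients would then need to send the custom header on state-changing requests.
  <configuration>
    <property>
      <name>yarn.resourcemanager.webapp.rest-csrf.enabled</name>
      <value>true</value>
    </property>
    <!-- Defaults restated for clarity -->
    <property>
      <name>yarn.resourcemanager.webapp.rest-csrf.custom-header</name>
      <value>X-XSRF-Header</value>
    </property>
    <property>
      <name>yarn.resourcemanager.webapp.rest-csrf.methods-to-ignore</name>
      <value>GET,OPTIONS,HEAD</value>
    </property>
  </configuration>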
Enable the CSRF filter for the NM web app
yarn.nodemanager.webapp.rest-csrf.enabledfalse
Optional parameter that indicates the custom header name to use for CSRF
protection.
yarn.nodemanager.webapp.rest-csrf.custom-headerX-XSRF-Header
Optional parameter that indicates the list of HTTP methods that do not
require CSRF protection
yarn.nodemanager.webapp.rest-csrf.methods-to-ignoreGET,OPTIONS,HEAD
The name of disk validator.
yarn.nodemanager.disk-validatorbasic
Enable the CSRF filter for the timeline service web app
yarn.timeline-service.webapp.rest-csrf.enabledfalse
Optional parameter that indicates the custom header name to use for CSRF
protection.
yarn.timeline-service.webapp.rest-csrf.custom-headerX-XSRF-Header
Optional parameter that indicates the list of HTTP methods that do not
require CSRF protection
yarn.timeline-service.webapp.rest-csrf.methods-to-ignoreGET,OPTIONS,HEAD
Enable the XFS filter for YARN
yarn.webapp.xfs-filter.enabledtrue
Property specifying the xframe options value.
yarn.resourcemanager.webapp.xfs-filter.xframe-optionsSAMEORIGIN
Property specifying the xframe options value.
yarn.nodemanager.webapp.xfs-filter.xframe-optionsSAMEORIGIN
Property specifying the xframe options value.
yarn.timeline-service.webapp.xfs-filter.xframe-optionsSAMEORIGIN
The least amount of time (in msec) an inactive (decommissioned or shutdown) node can
stay in the nodes list of the resourcemanager after being declared untracked.
A node is marked untracked if and only if it is absent from both include and
exclude nodemanager lists on the RM. All inactive nodes are checked twice per
timeout interval or every 10 minutes, whichever is less, and marked appropriately.
The same is done when refreshNodes command (graceful or otherwise) is invoked.
yarn.resourcemanager.node-removal-untracked.timeout-ms60000
The RMAppLifetimeMonitor service uses this value as its monitor interval
yarn.resourcemanager.application-timeouts.monitor.interval-ms3000
Defines the limit of the diagnostics message of an application
attempt, in kilo characters (character count * 1024).
When using ZooKeeper to store application state, it's
important to limit the size of the diagnostic messages to
prevent YARN from overwhelming ZooKeeper. In cases where
yarn.resourcemanager.state-store.max-completed-applications is set to
a large number, it may be desirable to reduce the value of this property
to limit the total data stored.
yarn.app.attempt.diagnostics.limit.kc64
Flag to enable cross-origin (CORS) support for timeline service v1.x or
Timeline Reader in timeline service v2. For timeline service v2, also add
org.apache.hadoop.security.HttpCrossOriginFilterInitializer to the
configuration hadoop.http.filter.initializers in core-site.xml.
yarn.timeline-service.http-cross-origin.enabledfalse
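As the description notes, enabling CORS for the timeline service v2 reader takes a setting in yarn-site.xml plus a filter initializer in core-site.xml; both are sketched below. Note that setting hadoop.http.filter.initializers replaces any initializers already configured, so in practice the class is appended to the existing comma-separated list.
  yarn-site.xml:
  <configuration>
    <property>
      <name>yarn.timeline-service.http-cross-origin.enabled</name>
      <value>true</value>
    </property>
  </configuration>
  core-site.xml (timeline service v2 only):
  <configuration>
    <property>
      <name>hadoop.http.filter.initializers</name>
      <value>org.apache.hadoop.security.HttpCrossOriginFilterInitializer</value>
    </property>
  </configuration>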
The comma separated list of class names that implement the
RequestInterceptor interface. This is used by the RouterClientRMService
to create the request processing pipeline for users.
yarn.router.clientrm.interceptor-class.pipelineorg.apache.hadoop.yarn.server.router.clientrm.DefaultClientRequestInterceptor
Size of LRU cache for Router ClientRM Service and RMAdmin Service.
yarn.router.pipeline.cache-max-size25
The comma separated list of class names that implement the
RequestInterceptor interface. This is used by the RouterRMAdminService
to create the request processing pipeline for users.
yarn.router.rmadmin.interceptor-class.pipelineorg.apache.hadoop.yarn.server.router.rmadmin.DefaultRMAdminRequestInterceptor
The actual address the server will bind to. If this optional address is
set, the RPC and webapp servers will bind to this address and the port specified in
yarn.router.address and yarn.router.webapp.address, respectively. This is
most useful for making Router listen to all interfaces by setting to 0.0.0.0.
yarn.router.bind-host
Comma-separated list of PlacementRules to determine how applications
submitted by certain users get mapped to certain queues. Default is
user-group, which corresponds to UserGroupMappingPlacementRule.
yarn.scheduler.queue-placement-rulesuser-group
The comma separated list of class names that implement the
RequestInterceptor interface. This is used by the RouterWebServices
to create the request processing pipeline for users.
yarn.router.webapp.interceptor-class.pipelineorg.apache.hadoop.yarn.server.router.webapp.DefaultRequestInterceptorREST
The http address of the Router web application.
If only a host is provided as the value,
the webapp will be served on a random port.
yarn.router.webapp.address0.0.0.0:8089
The https address of the Router web application.
If only a host is provided as the value,
the webapp will be served on a random port.
yarn.router.webapp.https.address0.0.0.0:8091
TimelineClient 1.5 configuration that controls whether to store the active
application's timeline data within the user directory, i.e.
${yarn.timeline-service.entity-group-fs-store.active-dir}/${user.name}
yarn.timeline-service.entity-group-fs-store.with-user-dirfalseyarn.resourcemanager.display.per-user-appsfalse
Flag to enable display of applications per user as an admin
configuration.
The type of configuration store to use for scheduler configurations.
Default is "file", which uses file based capacity-scheduler.xml to
retrieve and change scheduler configuration. To enable API based
scheduler configuration, use either "memory" (in memory storage, no
persistence across restarts), "leveldb" (leveldb based storage), or
"zk" (zookeeper based storage). API based configuration is only useful
when using a scheduler which supports mutable configuration. Currently
only capacity scheduler supports this.
yarn.scheduler.configuration.store.classfile
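For example, switching to the ZooKeeper-backed store for API-based scheduler configuration could look like the sketch below; the parent path simply restates the default listed further down, and this only takes effect with a scheduler that supports mutable configuration (currently the capacity scheduler).
  <configuration>
    <!-- Store scheduler configuration in ZooKeeper so it can be mutated via the API -->
    <property>
      <name>yarn.scheduler.configuration.store.class</name>
      <value>zk</value>
    </property>
    <property>
      <name>yarn.scheduler.configuration.zk-store.parent-path</name>
      <value>/confstore</value>
    </property>
  </configuration>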
The class to use for configuration mutation ACL policy if using a mutable
configuration provider. Controls whether a mutation request is allowed.
The DefaultConfigurationMutationACLPolicy checks if the requestor is a
YARN admin.
yarn.scheduler.configuration.mutation.acl-policy.classorg.apache.hadoop.yarn.server.resourcemanager.scheduler.DefaultConfigurationMutationACLPolicy
The storage path for LevelDB implementation of configuration store,
when yarn.scheduler.configuration.store.class is configured to be
"leveldb".
yarn.scheduler.configuration.leveldb-store.path${hadoop.tmp.dir}/yarn/system/confstore
The compaction interval for LevelDB configuration store in secs,
when yarn.scheduler.configuration.store.class is configured to be
"leveldb". Default is one day.
yarn.scheduler.configuration.leveldb-store.compaction-interval-secs86400
The max number of configuration change log entries kept in config
store, when yarn.scheduler.configuration.store.class is configured to be
"leveldb" or "zk". Default is 1000 for either.
yarn.scheduler.configuration.store.max-logs1000
ZK root node path for configuration store when using zookeeper-based
configuration store.
yarn.scheduler.configuration.zk-store.parent-path/confstoreyarn.resource-types
The resource types to be used for scheduling. Use resource-types.xml
to specify details about the individual resource types.
Provides an option for the client to load supported resource types from the RM
instead of depending on the local resource-types.xml file.
yarn.client.load.resource-types.from-serverfalseThe http address of the timeline reader web application.yarn.timeline-service.reader.webapp.address${yarn.timeline-service.webapp.address}The https address of the timeline reader web application.yarn.timeline-service.reader.webapp.https.address${yarn.timeline-service.webapp.https.address}
The actual address timeline reader will bind to. If this optional address is
set, the reader server will bind to this address and the port specified in
yarn.timeline-service.reader.webapp.address.
This is most useful for making the service listen to all interfaces by setting to
0.0.0.0.
yarn.timeline-service.reader.bind-host