Apache Hadoop YARN

OVERVIEW

The Architectural Center of Enterprise Hadoop

Part of the core Hadoop project, YARN is the architectural center of Hadoop that allows multiple data processing engines such as interactive SQL, real-time streaming, data science and batch processing to handle data stored in a single platform, unlocking an entirely new approach to analytics.

YARN is the foundation of the new generation of Hadoop and is enabling organizations everywhere to realize a modern data architecture.

What YARN Does

YARN is the prerequisite for Enterprise Hadoop, providing resource management and a central platform to deliver consistent operations, security, and data governance tools across Hadoop clusters.

YARN also extends the power of Hadoop to incumbent and new technologies found within the data center so that they can take advantage of cost effective, linear-scale storage and processing. It provides ISVs and developers a consistent framework for writing data access applications that run IN Hadoop.

As its architectural center, YARN enhances a Hadoop compute cluster in the following ways:

Multi-tenancy: YARN allows multiple access engines (either open-source or proprietary) to use Hadoop as the common standard for batch, interactive and real-time engines that can simultaneously access the same data set.

Compatibility: Existing MapReduce applications developed for Hadoop 1 can run on YARN without any disruption to processes that already work.

How YARN Works

YARN’s original purpose was to split the two major responsibilities of the JobTracker/TaskTracker — resource management and job scheduling/monitoring — into separate entities:

a global ResourceManager

a per-application ApplicationMaster

a per-node slave NodeManager

a per-application Container running on a NodeManager

The ResourceManager and the NodeManager form the generic system for managing applications in a distributed manner. The ResourceManager is the ultimate authority that arbitrates resources among all applications in the system. The ApplicationMaster is a framework-specific entity that negotiates resources from the ResourceManager and works with the NodeManager(s) to execute and monitor the component tasks.

The ResourceManager has a scheduler, which is responsible for allocating resources to the various applications running in the cluster, according to constraints such as queue capacities and user limits. The scheduler schedules based on the resource requirements of each application.
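As a rough illustration of the queue-capacity constraint described above, here is a minimal, hypothetical sketch (not the actual YARN scheduler code) of granting container requests while keeping each queue under its share of cluster memory:

```python
# Hypothetical sketch of capacity-constrained allocation, the kind of
# constraint the ResourceManager's scheduler enforces per queue.

def allocate(requests, queue_capacity, cluster_memory_mb):
    """Grant each request in order while its queue stays under capacity.

    requests: list of (app_id, queue, memory_mb)
    queue_capacity: queue name -> fraction of cluster memory it may use
    Returns the list of granted (app_id, memory_mb) allocations.
    """
    used = {q: 0 for q in queue_capacity}          # memory used per queue
    granted = []
    for app_id, queue, mem in requests:
        limit = queue_capacity[queue] * cluster_memory_mb
        if used[queue] + mem <= limit:             # respect the queue cap
            used[queue] += mem
            granted.append((app_id, mem))
    return granted

# Example: two queues splitting a 10 GB cluster 70/30.
grants = allocate(
    requests=[("app-1", "prod", 4096), ("app-2", "dev", 2048),
              ("app-3", "dev", 2048), ("app-4", "prod", 2048)],
    queue_capacity={"prod": 0.7, "dev": 0.3},
    cluster_memory_mb=10240,
)
# app-3 is denied: it would push the "dev" queue past its 30% share.
```

The real CapacityScheduler is far richer (hierarchical queues, user limits, preemption), but the core idea is the same: allocation decisions are made against configured shares, not first-come-first-served across the whole cluster.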

Each ApplicationMaster has responsibility for negotiating appropriate resource containers from the scheduler, tracking their status, and monitoring their progress. From the system perspective, the ApplicationMaster runs as a normal container.

The NodeManager is the per-machine slave, which is responsible for launching the applications’ containers, monitoring their resource usage (CPU, memory, disk, network) and reporting the same to the ResourceManager.
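The division of labor among the three components can be sketched end to end. The following is an illustrative simulation (these classes are stand-ins, not the real Hadoop API): the ApplicationMaster negotiates containers from the ResourceManager, and each grant is launched on a NodeManager, which accounts for the resources it consumes.

```python
# Illustrative simulation of the YARN control flow; class and method
# names are hypothetical, not the real org.apache.hadoop.yarn API.

class NodeManager:
    def __init__(self, node, memory_mb):
        self.node, self.free_mb, self.running = node, memory_mb, []

    def launch(self, container_id, mem):
        # Launch a container and account for its resource usage.
        self.free_mb -= mem
        self.running.append(container_id)

class ResourceManager:
    def __init__(self, nodes):
        self.nodes = nodes     # the NodeManagers that report to this RM
        self.next_id = 0

    def allocate(self, mem):
        # Grant a container on the first node with room, else None.
        for nm in self.nodes:
            if nm.free_mb >= mem:
                self.next_id += 1
                cid = f"container_{self.next_id}"
                nm.launch(cid, mem)
                return cid
        return None

class ApplicationMaster:
    def __init__(self, rm):
        self.rm, self.containers = rm, []

    def run(self, tasks_mb):
        # Negotiate one container per task and track what was granted.
        for mem in tasks_mb:
            cid = self.rm.allocate(mem)
            if cid:
                self.containers.append(cid)
        return self.containers

rm = ResourceManager([NodeManager("n1", 4096), NodeManager("n2", 4096)])
am = ApplicationMaster(rm)
granted = am.run([2048, 2048, 2048])  # first two fill n1; third lands on n2
```

In the real system these exchanges happen over heartbeat RPCs, and the ApplicationMaster itself runs inside a container granted by the same mechanism, but the flow of requests and grants follows this shape.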

Hortonworks Focus for YARN

YARN is the central point of investment for Hortonworks within the Apache community. In fact, YARN was originally proposed (MR-279) and architected by one of our founders, Arun Murthy. Our engineers have been working within the Hadoop community to deliver and improve YARN for years. It has matured to become the solid, reliable architectural center of Hadoop and is a foundational component.

While relied upon by thousands, YARN can always be improved, especially with new engines emerging to interact with Hadoop data. To this end, Hortonworks has laid out the following investment themes for this foundational technology.

Focus: Reliable Operations

Planned Enhancements: support for rolling upgrades, so that a YARN cluster can be upgraded without downtime.


