HDFS Cluster

An HDFS cluster consists of a NameNode, which manages the cluster metadata, and DataNodes, which store the data. Prior to Hadoop 2.0.0, the NameNode was a single point of failure (SPOF) in an HDFS cluster: each cluster had a single NameNode, and if that machine or process became unavailable, the cluster as a whole became unavailable.

You can follow the instructions here to format and start HDFS on Hortonworks Data Platform. HDFS can be accessed from applications in many different ways. Natively, HDFS provides a FileSystem Java API for applications to use. A C language wrapper for this Java API is also available. In addition, an HTTP browser can also be used to browse the files of an HDFS instance. Work is in progress to expose HDFS through the WebDAV protocol. For more information, read here.[3,10,11]

NameNode

A high-level summary of the NameNode, which:

Provides high availability (HA) using redundant NameNodes[2]

Active NameNode

Standby NameNode (a hot standby; not to be confused with the non-HA "Secondary NameNode", which only performs checkpointing)

Maintains the following two metadata files:

fsimage file

Holds the entire file system namespace,[12] including the mapping of blocks to files and file system properties

editlog file

Holds every change that occurs to the filesystem metadata

NameNode Web UI

To smoke test your NameNode server, you can use the following URL[7,11]

http://$namenode.full.hostname:50070

to determine if you can reach the NameNode server with the browser. If successful, you can also select the Utilities menu to "browse the file system".

High Availability

The HDFS High Availability feature (not to be confused with HDFS Federation, another feature introduced around the same time) addresses the SPOF problem by providing the option of running two redundant NameNodes in the same cluster in an Active/Passive configuration with a hot standby. This allows a fast failover to a new NameNode in case a machine crashes, or a graceful administrator-initiated failover for planned maintenance.

If the individual IDs of your NameNodes are nn1 and nn2, you can get their service state using the following command:[3]

hdfs haadmin -getServiceState nn1

When the NameNode starts up, it reads the FsImage and EditLog files from disk, merges all the transactions present in the EditLog into the FsImage, and flushes this new version out to a new FsImage on disk. It can then truncate the old EditLog, because its transactions have been applied to the persistent FsImage.
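The merge step can be sketched as a tiny key-value model, with the FsImage as a namespace snapshot and the EditLog as a list of pending operations (the names and shapes here are illustrative, not the real on-disk formats):

```python
# Toy model of the NameNode checkpoint: the fsimage is a snapshot of
# the namespace; the edit log holds transactions not yet in the image.
def apply_editlog(fsimage: dict, editlog: list) -> dict:
    """Merge every logged transaction into a new snapshot."""
    image = dict(fsimage)
    for op, path, value in editlog:
        if op == "create":
            image[path] = value
        elif op == "delete":
            image.pop(path, None)
    return image

fsimage = {"/a": "blk_1"}
editlog = [("create", "/b", "blk_2"), ("delete", "/a", None)]
new_fsimage = apply_editlog(fsimage, editlog)   # flushed to disk
editlog = []                                    # old EditLog can now be truncated
```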

Metadata files are stored at:

${dfs.namenode.name.dir}/edits

${dfs.namenode.name.dir}/fsimage

where the dfs.namenode.name.dir property can be configured in hdfs-site.xml.[8]
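For example, a minimal hdfs-site.xml entry might look like this (the path shown is just a common choice, not a requirement):

```xml
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/hadoop/hdfs/namenode</value>
</property>
```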

DataNode

A high-level summary of the DataNode:[4]

Scalable Storage

HDFS cluster storage scales horizontally with the addition of DataNodes

Minimal data motion

Hadoop moves compute processes to the data on HDFS and not the other way around.

Processing tasks can occur on the physical node where the data resides, which significantly reduces network I/O and provides very high aggregate bandwidth.

Data Disk Failure: Heartbeats and Replication

Each DataNode sends a Heartbeat message to the NameNode periodically.

If the NameNode stops receiving Heartbeat messages from a DataNode, it marks that DataNode as dead and stops forwarding new I/O requests to it.

The NameNode constantly tracks which blocks need to be replicated and initiates replication whenever necessary. The necessity for re-replication may arise due to many reasons:

a DataNode may become unavailable

a replica may become corrupted

a hard disk on a DataNode may fail

the replication factor of a file may be increased
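This tracking can be sketched as a simple check (the DataNode and block names here are invented for illustration):

```python
# Sketch of the NameNode's bookkeeping: a block needs re-replication
# when its count of replicas on live DataNodes drops below the
# file's replication factor.
def under_replicated(replicas: dict, live_nodes: set, factor: int = 3) -> list:
    """Return the blocks that have fewer than `factor` live replicas."""
    return [
        block for block, nodes in replicas.items()
        if sum(1 for n in nodes if n in live_nodes) < factor
    ]

replicas = {"blk_1": ["dn1", "dn2", "dn3"], "blk_2": ["dn1", "dn4", "dn5"]}
live = {"dn1", "dn2", "dn3", "dn4"}   # dn5 has stopped heartbeating
```

Here under_replicated(replicas, live) reports blk_2, so the NameNode would schedule a new replica of that block on another live DataNode.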

Data Rebalancing

HDFS automatically moves data from one DataNode to another if the free space on a DataNode falls below a certain threshold

Data Integrity: Checksums

When a client creates an HDFS file, it computes a checksum of each block of the file and stores these checksums in a separate hidden file in the same HDFS namespace.
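The idea can be sketched as follows (assumptions: a 1 MB "block size" instead of HDFS's much larger default, and sha256 standing in for the checksums HDFS actually uses):

```python
import hashlib

BLOCK_SIZE = 1024 * 1024  # illustrative; real HDFS blocks are far larger

def block_checksums(data: bytes, block_size: int = BLOCK_SIZE) -> list:
    """Checksum each fixed-size block of the file's contents."""
    return [
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    ]
```

When a client later reads the file, it can recompute each block's checksum and compare it against the stored value to detect corruption; on a mismatch, it can fetch that block from another replica.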

Hortonworks Data Platform

You can deploy Hortonworks Data Platform (HDP) with or without Apache Ambari. If you choose not to use Ambari, you can follow the instructions here. However, it is much easier to deploy the Apache Hadoop stack with Ambari (see the instructions here).

After initial installation and deployment, your Apache Hadoop cluster will still grow and change with use over time. With Apache Ambari, you can easily and quickly add new services or expand the storage and processing capacity of the cluster.

The ecosystem of Ambari consists of three main components:

Ambari Web

Ambari Server

Serves as the collection point for data from across the cluster

Ambari Agent

Runs on each host in the cluster to allow the Ambari Server to control it

Ambari Web

Using the Ambari Web UI and REST APIs, you can deploy, operate,
manage configuration changes, and monitor services for all nodes in your cluster from a
central point.

Ambari Web is a client-side JavaScript application, which calls the Ambari REST API
(accessible from the Ambari Server) to access cluster information and perform cluster
operations. A relational database is used to store the information about the cluster configuration and topology.

With Ambari Views, you can customize the Ambari Web UI. Ambari Views offer a systematic way to plug-in UI capabilities to surface custom visualization, management and monitoring features in Ambari Web.

Ambari Server

You must set up the Ambari Server once before starting it. Setup configures Ambari to talk to the Ambari database, installs the JDK, and lets you customize the user account (default: root) that the Ambari Server daemon runs as.

After setup, all the configuration is stored in:

/etc/ambari-server/conf/ambari.properties

Then you can run the following commands from the Ambari Server host:

ambari-server start

If you reboot your cluster, you must restart the Ambari Server and all the Ambari Agents manually.

ambari-server status

ambari-server stop

Once started, you can access Ambari using the following URL:

http://{ambari-server-hostname}:8080

from a web browser.

The start script /usr/sbin/ambari-server is a shell script that sets environment variables and kicks off a Python script, which in turn launches a Java process (see details here).

Ambari Agent

Ambari Agents will heartbeat to the master every few seconds and will receive
commands from the master in the heartbeat responses. Heartbeat responses
will be the only way for master to send a command to the Agent. The command
will be queued in the action queue, which will be picked up by the action
executioner.

The action executioner will pick the right tool (Puppet, Python, etc.) for
execution depending on the command type and action type. Thus the actions
sent in the heartbeat response will be processed asynchronously at the Agent.
The action executioner will put the response or progress messages on the
message queue. The Agent will send everything on the message queue to the
master in the next heartbeat.
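The loop above can be sketched with two in-memory queues (a toy model; the real Agent is considerably more involved):

```python
from collections import deque

action_queue = deque()    # commands received in heartbeat responses
message_queue = deque()   # results waiting for the next heartbeat

def on_heartbeat_response(commands):
    """The master's only channel to the Agent: queue each command."""
    action_queue.extend(commands)

def run_action_executioner():
    """Process queued commands asynchronously, reporting results."""
    while action_queue:
        cmd = action_queue.popleft()
        message_queue.append(f"done: {cmd}")

def next_heartbeat():
    """Drain everything on the message queue back to the master."""
    sent = list(message_queue)
    message_queue.clear()
    return sent
```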

Here are the steps to install the Ambari Agent manually on RHEL/CentOS/Oracle Linux 6:

Install the Ambari Agent on every host in your cluster.

yum install ambari-agent

Using a text editor, configure the Ambari Agent by editing the ambari-agent.ini file as shown below:

vi /etc/ambari-agent/conf/ambari-agent.ini

[server]
hostname=
url_port=8440
secured_url_port=8441

Start the Agent on every host in your cluster.

ambari-agent start

The Agent registers with the Server on start.

The Agent should not die if the master suddenly disappears. It
should continue to poll at regular intervals and recover as
needed when the master comes back up:

The Ambari Agent should keep all the necessary information it
planned to send to the master in case of a connection failure
and re-send the information after the master comes back up. It may need to re-register if it was previously in the process of
registering.

Troubleshooting

The first thing to do if you run into trouble is to find the logs. Ambari Agent logs can be found at

Thursday, January 19, 2017

In this article, we will focus on running ZooKeeper in replicated mode and cover its basics.

ZooKeeper Service

Apache ZooKeeper can be used in distributed applications (e.g., Yahoo! Message Broker) to enable highly reliable distributed coordination. For example, you can use ZooKeeper for the high availability of Spark standalone masters:[2] a standalone Spark Master can run with recovery mode enabled using ZooKeeper and recover state from among the available set of masters. Another example is HDFS NameNode HA, which can be enabled so that the NameNode is not a single point of failure;[10] in this case, HDFS relies on ZooKeeper to manage the details of failover.

ZooKeeper Functionalities

ZooKeeper allows distributed processes to coordinate with each other through a shared hierarchical name space of data registers (or znodes), much like a file system. Here are the high-level descriptions of its functionalities:

Provides semantics similar to Google's Chubby for coordinating distributed systems; being a consistent and highly available key-value store makes it an ideal cluster configuration store and directory of services

Is a centralized coordination service for

maintaining configuration data:

status information

configuration (e.g., security rules)

location information

naming

providing distributed synchronization

providing group services

Provides the following capabilities

Consensus

Group management

Presence protocols

Provides client bindings for a number of languages

In the release

ships with C, Java, Perl and Python client bindings

From the community

check here for a list of client bindings that are available from the community but not yet included in the release

ZooKeeper Architecture

When run in replicated mode, the ZooKeeper service comprises an ensemble of servers. A ZooKeeper cluster, or ensemble, consists of a leader node and followers. The leader is chosen by consensus within the ensemble; if the leader fails, another node will be elected leader.

The design requires that all servers in the ensemble know about each other. ZooKeeper servers maintain an in-memory image of the data tree, along with transaction logs and snapshots in a persistent store. The downside of an in-memory database is that the size of the database ZooKeeper can manage is limited by memory.

ZooKeeper Servers

To start ZooKeeper you need a configuration file which governs ZooKeeper's behavior. Here is a sample in conf/zoo.cfg:
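A minimal replicated-mode sample might look like this (hostnames and the data directory are placeholders):

```
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
```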

The entries of the form server.X list the servers that make up the ZooKeeper service. When the server starts up, it knows which server it is by looking for the file myid in the data directory (i.e., dataDir). That file contains the server number, in ASCII.

Note the two port numbers after each server name: "2888" and "3888". Peers use the former port to connect to other peers. Such a connection is necessary so that peers can communicate, for example, to agree upon the order of updates. More specifically, a ZooKeeper server uses this port to connect followers to the leader. When a new leader arises, a follower opens a TCP connection to the leader using this port. Because the default leader election also uses TCP, we currently require another port for leader election. This is the second port in the server entry.

ZooKeeper Clients

Clients connect to only a single ZooKeeper server. The client maintains a TCP connection through which it sends requests, gets responses, gets watch events, and sends heartbeats. If the TCP connection to the server breaks, the client will connect to a different server.

In ZooKeeper's configuration file, clientPort (e.g., 2181) specifies the port to listen on for client connections. There are two ways to connect to the ZooKeeper service:

telnet or nc[6]

ZooKeeper Command Line Interface (CLI)[7]

telnet or nc

You can issue commands to ZooKeeper via telnet or nc at the client port. Each command is composed of four letters. For example, the command ruok tests whether the server is running in a non-error state; the server responds with imok if it is running, and otherwise does not respond at all:

echo ruok | nc localhost 2181

Disclaimer

The statements and opinions expressed here are my own and do not necessarily represent those of Oracle.
