Apache Flume: Distributed Log Collection for Hadoop - Second Edition

Book Description

Design and implement a series of Flume agents to send streamed data into Hadoop

In Detail

Apache Flume is a distributed, reliable, and available service used to efficiently collect, aggregate, and move large amounts of log data. It is used to stream logs from application servers to HDFS for ad hoc analysis.

This book starts with an architectural overview of Flume and its logical components. It explores channels, sinks, and sink processors, followed by sources and channel selectors. By the end of this book, you will be fully equipped to construct a series of Flume agents to dynamically transport your stream data and logs from your systems into Hadoop.

This step-by-step book guides you through the architecture and components of Flume, covering different approaches that are then pulled together into a real-world, end-to-end use case, progressing gradually from the simplest to the most advanced features.

What You Will Learn

Understand the Flume architecture, and how to download and install open source Flume from Apache

Follow a detailed example of transporting weblogs in Near Real Time (NRT) to Kibana/Elasticsearch and archiving them in HDFS

Learn tips and tricks for transporting logs and data in your production environment

Understand and configure the Hadoop Distributed File System (HDFS) Sink

Use a morphline-backed Sink to feed data into Solr

Create redundant data flows using sink groups

Configure and use various sources to ingest data

Inspect data records and move them between multiple destinations based on payload content

Transform data en-route to Hadoop and monitor your data flows
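Throughout the book, Flume agents are defined in Java properties files. As a taste of the configuration style covered, here is a minimal sketch of a single agent wiring a source through a memory channel to an HDFS sink; the agent name, component names (r1, c1, k1), port, and HDFS path are illustrative placeholders, not values from the book:

```properties
# Name this agent's components
agent.sources = r1
agent.channels = c1
agent.sinks = k1

# A netcat source listening on a local port
agent.sources.r1.type = netcat
agent.sources.r1.bind = localhost
agent.sources.r1.port = 44444
agent.sources.r1.channels = c1

# An in-memory channel buffering events between source and sink
agent.channels.c1.type = memory
agent.channels.c1.capacity = 1000

# An HDFS sink writing events into Hadoop
agent.sinks.k1.type = hdfs
agent.sinks.k1.hdfs.path = hdfs://namenode/flume/events
agent.sinks.k1.channel = c1
```

An agent like this would typically be started with the flume-ng command, for example: flume-ng agent -n agent -c conf -f example.conf.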

Downloading the Example Code

You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.