Introduction to Apache Kafka Training Video

Key information

Course

Online

Duration: Flexible

Start date: To be determined

Online campus

Descripción

This course, offered by Career vision, will help you improve your skills and achieve your professional goals. During the program you will study subjects chosen to help you advance your professional career. Sign up for more information!

- Learn at your own pace with an intuitive, easy-to-use interface

- Grasp even the most complex Apache Kafka subjects quickly, because they're broken into simple, easy-to-follow tutorial videos

- Practice with working files that reinforce each lesson, so you'll know the exact steps for your own projects

Currently one of the hottest projects across the Hadoop ecosystem, Apache Kafka is a distributed, real-time data system that works much like a pub/sub messaging service, but with better throughput, built-in partitioning, replication, and fault tolerance. In this video course, host Gwen Shapira from Cloudera shows developers and administrators how to integrate Kafka into a data processing pipeline.
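The built-in partitioning mentioned above can be illustrated with a short sketch. Kafka's default partitioner hashes a record's key and takes the result modulo the partition count, so records with the same key always land in the same partition. The Java client uses murmur2 for this; the sketch below substitutes Python's `zlib.crc32` to stay dependency-free, so the exact partition numbers will differ from a real cluster.

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Pick a partition by hashing the record key, mirroring the idea
    behind Kafka's default partitioner. Kafka itself uses murmur2;
    crc32 here is a stand-in so the sketch needs no external libraries."""
    return zlib.crc32(key) % num_partitions

# The same key always maps to the same partition, which is what
# gives Kafka its per-key ordering guarantee.
p1 = partition_for(b"user-42", 6)
p2 = partition_for(b"user-42", 6)
assert p1 == p2 and 0 <= p1 < 6
```

Because ordering is guaranteed only within a partition, choosing a good key (for example, a user ID) is how you get in-order delivery for each entity without serializing the whole topic.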

You'll start with Kafka basics, walk through code examples of Kafka producers and consumers, and then learn how to integrate Kafka with Hadoop. By the end of this course, you'll be ready to use this service for large-scale log collection and stream processing.

- Learn Kafka's use cases and the problems that it solves
- Understand the basics, including logs, partitions, replicas, consumers, and producers
- Set up a Kafka cluster, starting with a single node before adding more
- Write producers and consumers, using old and new APIs
- Use the Flume log aggregation framework to integrate Kafka with Hadoop
- Configure Kafka for availability and consistency, and learn how to troubleshoot various issues
- Become familiar with the entire Kafka ecosystem
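The log, producer, consumer, and offset concepts covered in the course fit together in a simple way: producers append records to a partitioned log, and each consumer tracks its own read position (offset) independently. The toy class below is a conceptual sketch of that model only; it is not the Kafka client API, and all names in it are made up for illustration.

```python
from collections import defaultdict

class MiniLog:
    """A toy, in-memory stand-in for a single Kafka topic: producers
    append to per-partition lists, and each consumer tracks its own
    offset per partition. This mirrors Kafka's concepts, not its API."""

    def __init__(self, num_partitions: int):
        self.partitions = [[] for _ in range(num_partitions)]
        # (consumer, partition) -> next offset that consumer will read
        self.offsets = defaultdict(int)

    def produce(self, partition: int, message: str) -> int:
        """Append a record and return its offset within the partition."""
        self.partitions[partition].append(message)
        return len(self.partitions[partition]) - 1

    def poll(self, consumer: str, partition: int):
        """Return records this consumer has not yet seen, then advance
        its offset. Reads never remove data, so another consumer can
        independently read the same records."""
        start = self.offsets[(consumer, partition)]
        records = self.partitions[partition][start:]
        self.offsets[(consumer, partition)] = len(self.partitions[partition])
        return records

log = MiniLog(num_partitions=2)
log.produce(0, "click:home")
log.produce(0, "click:cart")
print(log.poll("analytics", 0))  # both records on first poll
print(log.poll("analytics", 0))  # empty: offset already advanced
```

Because offsets belong to consumers rather than to the log, a slow consumer never blocks a fast one, and new consumers can replay history from offset 0. That decoupling is what makes Kafka suitable for the large-scale log collection and stream processing use cases the course targets.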

Gwen Shapira is a software engineer at Cloudera with 15 years of experience working with customers to design scalable data architectures. Having worked as a data warehouse DBA, an ETL developer, and a senior consultant, Gwen specializes in building scalable data processing pipelines and integrating existing data systems with Hadoop. She's a committer on Apache Sqoop and an active contributor to Apache Kafka.