Stream Processing Training Courses

Stream Processing training is available as "onsite live training" or "remote live training". Luxembourg onsite live Stream Processing trainings can be carried out locally on customer premises or in NobleProg corporate training centers. Remote live training is carried out by way of an interactive, remote desktop.

Stream Processing Course Outlines

Apache Ignite is an in-memory computing platform that sits between the application and data layer to improve speed, scale, and availability.

In this instructor-led, live training, participants will learn the principles behind persistent and pure in-memory storage as they step through the creation of a sample in-memory computing project.

By the end of this training, participants will be able to:

- Use Ignite for in-memory, on-disk persistence as well as a purely distributed in-memory database.
- Achieve persistence without syncing data back to a relational database.
- Use Ignite to carry out SQL and distributed joins.
- Improve performance by moving data closer to the CPU, using RAM as storage.
- Spread data sets across a cluster to achieve horizontal scalability.
- Integrate Ignite with RDBMS, NoSQL, Hadoop and machine learning processors.
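
As a taste of the API covered, here is a minimal Ignite sketch in Java, assuming the ignite-core dependency is on the classpath; the cache name and contents are illustrative only:

```java
// Minimal Apache Ignite sketch: start a node, create an in-memory
// key-value cache, and read the data back.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class IgniteDemo {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {   // start a local node
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("demoCache");
            cache.put(1, "Hello");
            cache.put(2, "Ignite");
            System.out.println(cache.get(1) + " " + cache.get(2));
        }
    }
}
```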

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.

Apache Apex is a YARN-native platform that unifies stream and batch processing. It processes big data in motion in a way that is scalable, performant, fault-tolerant, stateful, secure, distributed, and easily operable.

This instructor-led, live training introduces Apache Apex's unified stream processing architecture, and walks participants through the creation of a distributed application using Apex on Hadoop.

By the end of this training, participants will be able to:

- Understand data processing pipeline concepts such as connectors for sources and sinks, common data transformations, etc.
- Build, scale and optimize an Apex application
- Process real-time data streams reliably and with minimum latency
- Use Apex Core and the Apex Malhar library to enable rapid application development
- Use the Apex API to write and re-use existing Java code
- Integrate Apex into other applications as a processing engine
- Tune, test and scale Apex applications
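
For orientation, the following is a minimal Apex application sketch, assuming the Apex Core and Malhar libraries are on the classpath; the NumberSource operator is a hypothetical example written here, while ConsoleOutputOperator ships with Malhar:

```java
// Minimal Apache Apex sketch: a custom input operator streams numbers
// to a console sink through a DAG of operators.
import org.apache.hadoop.conf.Configuration;

import com.datatorrent.api.DAG;
import com.datatorrent.api.DefaultOutputPort;
import com.datatorrent.api.InputOperator;
import com.datatorrent.api.StreamingApplication;
import com.datatorrent.common.util.BaseOperator;
import com.datatorrent.lib.io.ConsoleOutputOperator;

public class DemoApp implements StreamingApplication {

    // Hypothetical source: emits an ever-increasing counter.
    public static class NumberSource extends BaseOperator implements InputOperator {
        public final transient DefaultOutputPort<Long> out = new DefaultOutputPort<>();
        private long n;

        @Override
        public void emitTuples() {
            out.emit(n++);   // called repeatedly by the engine within each window
        }
    }

    @Override
    public void populateDAG(DAG dag, Configuration conf) {
        NumberSource source = dag.addOperator("source", new NumberSource());
        ConsoleOutputOperator console = dag.addOperator("console", new ConsoleOutputOperator());
        dag.addStream("numbers", source.out, console.input);   // wire source -> sink
    }
}
```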

Audience

- Developers
- Enterprise architects

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Apache Beam is an open source, unified programming model for defining and executing parallel data processing pipelines. Its power lies in its ability to run both batch and streaming pipelines, with execution being carried out by one of Beam's supported distributed processing back-ends: Apache Apex, Apache Flink, Apache Spark, and Google Cloud Dataflow. Apache Beam is useful for ETL (Extract, Transform, and Load) tasks such as moving data between different storage media and data sources, transforming data into a more desirable format, and loading data onto a new system.

In this instructor-led, live training (onsite or remote), participants will learn how to implement the Apache Beam SDKs in a Java or Python application that defines a data processing pipeline for decomposing a big data set into smaller chunks for independent, parallel processing.

By the end of this training, participants will be able to:

- Install and configure Apache Beam.
- Use a single programming model to carry out both batch and stream processing from within their Java or Python application.
- Execute pipelines across multiple environments.
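
As a taste of that single programming model, here is a minimal Beam pipeline sketch using the Java SDK; the transform and the output path are illustrative, and the back-end is chosen at launch time via the --runner option:

```java
// Minimal Apache Beam sketch (Java SDK): build a small PCollection,
// transform it, and write the result. The same pipeline can run on
// Apex, Flink, Spark, or Dataflow depending on the configured runner.
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class BeamDemo {
    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline p = Pipeline.create(options);
        p.apply(Create.of("alpha", "beta", "gamma"))                  // in-memory source
         .apply(MapElements.into(TypeDescriptors.strings())
                           .via((String s) -> s.toUpperCase()))       // simple transform
         .apply(TextIO.write().to("out"));                            // illustrative sink
        p.run().waitUntilFinish();
    }
}
```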

Audience

- Developers

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- This course will be available in Scala in the future. Please contact us to arrange.

This instructor-led, live training (onsite or remote) is aimed at engineers who wish to use Confluent (a distribution of Kafka) to build and manage a real-time data processing platform for their applications.

Apache Flink is an open-source framework for scalable stream and batch data processing.

This instructor-led, live training introduces the principles and approaches behind distributed stream and batch data processing, and walks participants through the creation of a real-time, data streaming application in Apache Flink.
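
For a sense of what such an application looks like, here is a minimal Flink DataStream sketch in Java, a streaming word count over a local socket; the host and port are illustrative:

```java
// Minimal Apache Flink sketch: count words arriving on a socket
// (e.g. fed by `nc -lk 9999`) and print running totals.
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class FlinkDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.socketTextStream("localhost", 9999)
           .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
               @Override
               public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                   for (String word : line.split("\\s+")) {
                       out.collect(Tuple2.of(word, 1));   // one tuple per word
                   }
               }
           })
           .keyBy(t -> t.f0)   // group by word
           .sum(1)             // running count per word
           .print();
        env.execute("Socket word count");
    }
}
```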

Stream Processing refers to the real-time processing of "data in motion", that is, performing computations on data as it is being received. Such data is read as continuous streams from data sources such as sensor events, website user activity, financial trades, credit card swipes, click streams, etc. Stream Processing frameworks are able to read large volumes of incoming data and provide valuable insights almost instantaneously.

In this instructor-led, live training (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices.

By the end of this training, participants will be able to:

- Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streams (see the sketch after this list)
- Understand and select the most appropriate framework for the job
- Process data continuously, concurrently, and in a record-by-record fashion
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices
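
As referenced above, here is a minimal Spark Structured Streaming sketch in Java; it echoes lines from a local socket to the console, and the host and port are illustrative:

```java
// Minimal Spark Structured Streaming sketch: read lines from a socket
// and continuously echo each micro-batch to the console.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class SparkStreamingDemo {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("StreamingDemo")
                .master("local[*]")                 // local run for demonstration
                .getOrCreate();

        Dataset<Row> lines = spark.readStream()
                .format("socket")
                .option("host", "localhost")
                .option("port", "9999")
                .load();

        StreamingQuery query = lines.writeStream()
                .outputMode("append")
                .format("console")
                .start();
        query.awaitTermination();                   // run until stopped
    }
}
```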

Audience

- Developers
- Software architects

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice

Notes

- To request a customized training for this course, please contact us to arrange.

Kafka Streams is a client-side library for building applications and microservices whose data is passed to and from a Kafka messaging system. Traditionally, Apache Kafka has relied on Apache Spark or Apache Storm to process data between message producers and consumers. By calling the Kafka Streams API from within an application, data can be processed directly within Kafka, removing the need to send it to a separate cluster for processing.
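
To illustrate, here is a minimal Kafka Streams sketch in Java; the application id, broker address, and topic names are illustrative:

```java
// Minimal Kafka Streams sketch: read from one topic, upper-case each
// value, and write to another topic.
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamsDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> in = builder.stream("input-topic");
        in.mapValues(v -> v.toUpperCase()).to("output-topic");   // transform and forward

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```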

In this instructor-led, live training, participants will learn how to integrate Kafka Streams into a set of sample Java applications that pass data to and from Apache Kafka for stream processing.

By the end of this training, participants will be able to:

- Understand Kafka Streams features and advantages over other stream processing frameworks
- Process stream data directly within a Kafka cluster
- Write a Java or Scala application or microservice that integrates with Kafka and Kafka Streams
- Write concise code that transforms input Kafka topics into output Kafka topics
- Build, package and deploy the application

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Notes

- To request a customized training for this course, please contact us to arrange.

Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.

In this instructor-led, live training, participants will learn how to deploy and manage Apache NiFi in a live lab environment.

In this instructor-led, live training, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.
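
As a flavor of the processor API covered, here is a minimal custom NiFi processor sketch in Java, assuming the nifi-api dependency; it simply routes every incoming FlowFile to a success relationship:

```java
// Minimal custom NiFi processor sketch: pass each incoming FlowFile
// unchanged to a "success" relationship.
import java.util.Collections;
import java.util.Set;

import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

public class PassThroughProcessor extends AbstractProcessor {

    public static final Relationship SUCCESS = new Relationship.Builder()
            .name("success")
            .description("All FlowFiles are routed here")
            .build();

    @Override
    public Set<Relationship> getRelationships() {
        return Collections.singleton(SUCCESS);
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        FlowFile flowFile = session.get();       // next queued FlowFile, if any
        if (flowFile == null) {
            return;
        }
        session.transfer(flowFile, SUCCESS);     // pass it downstream unchanged
    }
}
```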

Apache Samza is an open-source framework for distributed, near-real-time stream processing, built on Apache Kafka for messaging and YARN for cluster resource management.

This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.

By the end of this training, participants will be able to:

- Use Samza to simplify the code needed to produce and consume messages.
- Decouple the handling of messages from an application.
- Use Samza to implement near-real-time asynchronous computation.
- Use stream processing to provide a higher level of abstraction over messaging systems.
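
For orientation, here is a minimal Samza sketch using the low-level StreamTask API; the system and stream names are illustrative, and the job wiring (normally a properties file) is omitted:

```java
// Minimal Samza sketch: forward only messages containing "ERROR"
// to an output stream.
import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.system.OutgoingMessageEnvelope;
import org.apache.samza.system.SystemStream;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskCoordinator;

public class ErrorFilterTask implements StreamTask {

    private static final SystemStream OUTPUT = new SystemStream("kafka", "error-events");

    @Override
    public void process(IncomingMessageEnvelope envelope,
                        MessageCollector collector,
                        TaskCoordinator coordinator) {
        String message = (String) envelope.getMessage();
        if (message.contains("ERROR")) {
            collector.send(new OutgoingMessageEnvelope(OUTPUT, message));
        }
    }
}
```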

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Apache Storm is a distributed, real-time computation engine used for enabling real-time business intelligence. It does so by enabling applications to reliably process unbounded streams of data (a.k.a. stream processing).

"Storm is for real-time processing what Hadoop is for batch processing!"

In this instructor-led, live training, participants will learn how to install and configure Apache Storm, then develop and deploy an Apache Storm application for processing big data in real time.
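
As a preview, here is a minimal Storm topology sketch in Java, assuming Storm 2.x package names; it wires Storm's built-in test spout to a hypothetical printing bolt and runs in local mode:

```java
// Minimal Storm topology sketch: TestWordSpout -> PrinterBolt on an
// in-process local cluster.
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.testing.TestWordSpout;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

public class StormDemo {

    // Hypothetical bolt that prints each word it receives.
    public static class PrinterBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            System.out.println(input.getString(0));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // terminal bolt: no output fields
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new TestWordSpout());
        builder.setBolt("printer", new PrinterBolt()).shuffleGrouping("words");

        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("demo", new Config(), builder.createTopology());
            Thread.sleep(10_000);   // let the topology run briefly before shutdown
        }
    }
}
```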

Tigon is an open-source, real-time, low-latency, high-throughput, YARN-native stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics, social media market analysis, location analytics, and real-time recommendations to users.

This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application.

By the end of this training, participants will be able to:

- Create powerful stream processing applications for handling large volumes of data
- Process stream sources such as Twitter and web server logs
- Use Tigon for rapid joining, filtering, and aggregating of streams

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice