
Big Data Training Courses

Local, instructor-led live Big Data training courses start with an introduction to the foundational concepts of Big Data, then progress into the programming languages and methodologies used to perform data analysis. Tools and infrastructure for enabling Big Data storage, distributed processing, and scalability are discussed, compared, and implemented in hands-on demo sessions.

Big Data training is available as "onsite live training" or "remote live training". Onsite live Big Data trainings in the Philippines can be carried out locally on customer premises or in NobleProg corporate training centers. Remote live training is carried out by way of an interactive, remote desktop.

NobleProg -- Your Local Training Provider

Testimonials

★★★★★

I really liked the content / Instructor.

Craig Roberson

Course: Data Visualization

I am a hands-on learner and this was something that he did a lot of.

Lisa Comfort

Course: Data Visualization

I liked the examples.

Peter Coleman

Course: Data Visualization


I enjoyed the good real world examples, reviews of existing reports.

Ronald Parrish

Course: Data Visualization

I really benefited from the willingness of the trainer to share more.

Balaram Chandra Paul

Course: A Practical Introduction to Data Analysis and Big Data

He was interactive.

Suraj

Course: Semantic Web Overview

We know a lot more about the whole environment.

John Kidd

Course: Spark for Developers

The trainer made the class interesting and entertaining which helps quite a bit with all day training.

Ryan Speelman

Course: Spark for Developers

I think the trainer had an excellent style of combining humor and real life stories to make the subjects at hand very approachable. I would highly recommend this professor in the future.

Course: Spark for Developers

Liked very much the interactive way of learning.

Luigi Loiacono

Course: Data Analysis with Hive/HiveQL

It was a very practical training, I liked the hands-on exercises.

Proximus

Course: Data Analysis with Hive/HiveQL

I benefited from the good overview and the good balance between theory and exercises.

Proximus

Course: Data Analysis with Hive/HiveQL

I enjoyed the dynamic interaction and the hands-on approach to the subject, thanks to the Virtual Machine. Very stimulating!

Philippe Job

Course: Data Analysis with Hive/HiveQL

Ernesto did a great job explaining the high level concepts of using Spark and its various modules.

Michael Nemerouf

Course: Spark for Developers

I benefited from the competence and knowledge of the trainer.

Jonathan Puvilland

Course: Data Analysis with Hive/HiveQL

I benefited from some new and interesting ideas, and from meeting and interacting with other attendees.

Michael, the trainer, is very knowledgeable and skillful about the subject of Big Data and R. He is very flexible and quickly customizes the training to meet clients' needs. He is also very capable of solving technical and subject matter problems on the go. Fantastic and professional training!

Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada

Course: Programming with Big Data in R

The tutor, Mr. Michael An, interacted with the audience very well, and the instruction was clear. The tutor also went out of his way to add more information based on requests from the students during the training.

Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada

It was very hands-on; we spent half the time actually doing things in Cloudera/Hadoop, running different commands, checking the system, and so on.
The extra materials (books, websites, etc.) were really appreciated, as we will have to continue to learn.
The installations were quite fun, and very handy; the cluster setup from scratch was really good.

Ericsson

Course: Administrator Training for Apache Hadoop

Richard's training style kept it interesting; the real-world examples used helped to drive the concepts home.

Jamie Martin-Royle - NBrown Group

Course: From Data to Decision with Big Data and Predictive Analytics

The content, which I found very interesting and think will help me in my final year at university.

Krishan Mistry - NBrown Group

Course: From Data to Decision with Big Data and Predictive Analytics

The hands-on exercises are good for us to appreciate the capability and features of Talend.

Iverson Associates Sdn Bhd

Course: Talend Open Studio for Data Integration

I mostly enjoyed the hands-on Training.

Muraly Muniandy - Iverson Associates Sdn Bhd

Course: Talend Open Studio for Data Integration

I liked the trainer's knowledge and flexibility with discussions slightly off course topic.

Iverson Associates Sdn Bhd

Course: Talend Open Studio for Data Integration

I liked the vast knowledge and experience on topic.

Iverson Associates Sdn Bhd

Course: Talend Open Studio for Data Integration

The trainer was fantastic and really knew his stuff. I learned a lot about the software I didn't know previously which will help a lot at my job!

Steve McPhail - Alberta Health Services - Information Technology

Course: Data Analysis with Hive/HiveQL

The high-level principles of Hive, HDFS, etc.

Geert Suys - Proximus Group

Course: Data Analysis with Hive/HiveQL

The hands-on exercises. The mix of practice and theory.

Proximus Group

Course: Data Analysis with Hive/HiveQL

Fulvio was able to grasp our company's business case and correlate it with the course material, almost instantly.

Samuel Peeters - Proximus Group

Course: Data Analysis with Hive/HiveQL

Lot of hands-on exercises.

Ericsson

Course: Administrator Training for Apache Hadoop

The Ambari management tool, and the ability to discuss practical Hadoop experiences from business cases other than telecom.

Big Data Course Outlines

The Semantic Web is a collaborative movement led by the World Wide Web Consortium (W3C) that promotes common formats for data on the World Wide Web. The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.
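The Semantic Web's core data model can be pictured as a set of subject-predicate-object triples that any application can query. As a rough, hypothetical illustration in plain Python (real systems use RDF stores and SPARQL):

```python
# Toy triple store: facts as (subject, predicate, object) tuples.
triples = {
    ("Manila", "isCapitalOf", "Philippines"),
    ("Philippines", "locatedIn", "Asia"),
}

def query(subject=None, predicate=None, obj=None):
    """Return the triples matching a pattern; None acts as a wildcard."""
    return {
        (s, p, o) for (s, p, o) in triples
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    }

print(query(predicate="isCapitalOf"))
# {('Manila', 'isCapitalOf', 'Philippines')}
```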

Data Vault Modeling is a database modeling technique that provides long-term historical storage of data that originates from multiple sources. A data vault stores a single version of the facts, or "all the data, all the time". Its flexible, scalable, consistent and adaptable design encompasses the best aspects of 3rd normal form (3NF) and star schema.

In this instructor-led, live training, participants will learn how to build a Data Vault.
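As a rough sketch of the modeling vocabulary (the class and field names below are hypothetical, not tied to any real warehouse), the three Data Vault building blocks can be pictured as:

```python
# Hubs hold business keys, links relate hubs, and satellites hold the
# descriptive, historized attributes (with a load date for history).
from dataclasses import dataclass
from datetime import date

@dataclass
class Hub:                    # one row per business key
    business_key: str

@dataclass
class Link:                   # relates two (or more) hubs
    hub_keys: tuple

@dataclass
class Satellite:              # descriptive attributes, historized by load date
    hub_key: str
    attributes: dict
    load_date: date

customer = Hub("CUST-001")
order = Hub("ORD-042")
placed = Link((customer.business_key, order.business_key))
details = Satellite("CUST-001", {"name": "Acme"}, date(2018, 1, 1))
print(placed.hub_keys)  # ('CUST-001', 'ORD-042')
```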

Python is a high-level programming language famous for its clear syntax and code readability. Spark is a data processing engine used in querying, analyzing, and transforming big data. PySpark allows users to interface Spark with Python.

In this instructor-led, live training, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises.

By the end of this training, participants will be able to:

- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real world circumstances.
- Use different tools and techniques for big data analysis using PySpark.

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
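As a conceptual taster of the transformation style covered in the course, here is a word count arranged like Spark's classic flatMap/map/reduceByKey pipeline. This is illustrative plain Python only; real PySpark code expresses the same steps as distributed RDD or DataFrame operations.

```python
# Illustrative only: a plain-Python word count in the flatMap/reduceByKey
# style that PySpark applies to distributed datasets.
def word_count(lines):
    # "flatMap": split every line into words
    words = (word for line in lines for word in line.split())
    # "map" each word to a count of 1, then "reduceByKey": sum counts per word
    counts = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_count(["big data", "big deal"]))  # {'big': 2, 'data': 1, 'deal': 1}
```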

Advances in technologies and the increasing amount of information are transforming how law enforcement is conducted. The challenges that Big Data pose are nearly as daunting as Big Data's promise. Storing data efficiently is one of these challenges; effectively analyzing it is another.

In this instructor-led, live training, participants will learn the mindset with which to approach Big Data technologies, assess their impact on existing processes and policies, and implement these technologies for the purpose of identifying criminal activity and preventing crime. Case studies from law enforcement organizations around the world will be examined to gain insights on their adoption approaches, challenges and results.

By the end of this training, participants will be able to:

- Combine Big Data technology with traditional data gathering processes to piece together a story during an investigation
- Implement industrial big data storage and processing solutions for data analysis
- Prepare a proposal for the adoption of the most adequate tools and processes for enabling a data-driven approach to criminal investigation

Audience

- Law Enforcement specialists with a technical background

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

To meet regulators' compliance requirements, CSPs (communication service providers) can tap into Big Data analytics, which not only helps them meet compliance but, within the scope of the same project, can also increase customer satisfaction and thus reduce churn. In fact, since compliance is related to the quality of service tied to a contract, any initiative toward meeting compliance will improve the "competitive edge" of CSPs. It is therefore important that regulators be able to advise on and guide a set of Big Data analytics practices for CSPs that will be of mutual benefit to regulators and CSPs alike.

Many real world problems can be described in terms of graphs. For example, the Web graph, the social network graph, the train network graph and the language graph. These graphs tend to be extremely large; processing them requires a specialized set of tools and processes -- these tools and processes can be referred to as Graph Computing (also known as Graph Analytics).

In this instructor-led, live training, participants will learn about the technology offerings and implementation approaches for processing graph data. The aim is to identify real-world objects, their characteristics and relationships, then model these relationships and process them as data using a Graph Computing (also known as Graph Analytics) approach. We start with a broad overview and narrow in on specific tools as we step through a series of case studies, hands-on exercises and live deployments.

By the end of this training, participants will be able to:

- Understand how graph data is persisted and traversed.
- Select the best framework for a given task (from graph databases to batch processing frameworks).
- Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel.
- View real-world big data problems in terms of graphs, processes and traversals.

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
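The vertex-centric superstep model used by Pregel and GraphX can be sketched on a single machine in plain Python. This toy version propagates the maximum value through a graph; the real frameworks distribute the same loop across many machines and add fault tolerance.

```python
# "Think like a vertex": each superstep, every vertex reads its incoming
# messages, updates its value, and messages its neighbors; the computation
# halts once no vertex changes.
def pregel_max(graph, values):
    """Propagate the maximum value to every reachable vertex."""
    # Superstep 0: every vertex sends its own value to its neighbors
    messages = {v: [] for v in graph}
    for v in graph:
        for neighbor in graph[v]:
            messages[neighbor].append(values[v])
    while True:
        changed = False
        outgoing = {v: [] for v in graph}
        for v, incoming in messages.items():
            new_value = max([values[v]] + incoming)
            if new_value != values[v]:         # vertex updates and re-messages
                values[v] = new_value
                changed = True
                for neighbor in graph[v]:
                    outgoing[neighbor].append(new_value)
        if not changed:                        # every vertex votes to halt
            return values
        messages = outgoing

# 1 - 2 - 3 chain: the maximum (9, at vertex 3) reaches every vertex
graph = {1: [2], 2: [1, 3], 3: [2]}
print(pregel_max(graph, {1: 4, 2: 7, 3: 9}))  # {1: 9, 2: 9, 3: 9}
```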

Predictive analytics is the process of using data analytics to make predictions about the future. This process uses data along with data mining, statistics, and machine learning techniques to create a predictive model for forecasting future events.

In this instructor-led, live training, participants will learn how to use Matlab to build predictive models and apply them to large sample data sets to predict future events based on the data.

By the end of this training, participants will be able to:

- Create predictive models to analyze patterns in historical and transactional data
- Use predictive modeling to identify risks and opportunities
- Build mathematical models that capture important trends
- Use data from devices and business systems to reduce waste, save time, or cut costs

Audience

- Developers
- Engineers
- Domain experts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
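As a minimal illustration of the idea behind predictive modeling (the course itself uses Matlab; this sketch is plain Python with hypothetical sample data), an ordinary least-squares line fit trains a model on historical data and then forecasts a future value:

```python
# Fit y = slope * x + intercept by ordinary least squares, then predict.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# Hypothetical historical monthly sales; forecast month 6 from months 1-5
model = fit_line([1, 2, 3, 4, 5], [10, 12, 14, 16, 18])
print(predict(model, 6))  # 20.0
```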

Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.

In this instructor-led, live training, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.

Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.

In this instructor-led, live training, participants will learn how to deploy and manage Apache NiFi in a live lab environment.

Apache SolrCloud is a distributed data processing engine that facilitates the searching and indexing of files on a distributed network.

In this instructor-led, live training, participants will learn how to set up a SolrCloud instance on Amazon AWS.

By the end of this training, participants will be able to:

- Understand SolrCloud's features and how they compare to those of conventional master-slave clusters
- Configure a SolrCloud centralized cluster
- Automate processes such as communicating with shards, adding documents to the shards, etc.
- Use Zookeeper in conjunction with SolrCloud to further automate processes
- Use the interface to manage error reporting
- Load balance a SolrCloud installation
- Configure SolrCloud for continuous processing and fail-over
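The shard-related automation above rests on hash-based document routing. As a hypothetical sketch of the idea in plain Python (real SolrCloud hashes the document id with MurmurHash over a 32-bit hash range; `zlib.crc32` merely stands in here):

```python
# Route each document to a shard by hashing its id: the same id always
# lands on the same shard, so lookups know where to go.
import zlib

NUM_SHARDS = 4

def shard_for(doc_id: str) -> int:
    return zlib.crc32(doc_id.encode()) % NUM_SHARDS

for doc_id in ["doc-1", "doc-2", "doc-3"]:
    print(doc_id, "-> shard", shard_for(doc_id))
```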

Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion.

In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.

Tigon is an open-source, real-time, low-latency, high-throughput, native YARN, stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics, social media market analysis, location analytics, and real-time recommendations to users.

This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application.

By the end of this training, participants will be able to:

- Create powerful stream processing applications for handling large volumes of data
- Process stream sources such as Twitter and Webserver Logs
- Use Tigon for rapid joining, filtering, and aggregating of streams

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Apache Ignite is an in-memory computing platform that sits between the application and data layer to improve speed, scale, and availability.

In this instructor-led, live training, participants will learn the principles behind persistent and pure in-memory storage as they step through the creation of a sample in-memory computing project.

By the end of this training, participants will be able to:

- Use Ignite for in-memory, on-disk persistence as well as a purely distributed in-memory database.
- Achieve persistence without syncing data back to a relational database.
- Use Ignite to carry out SQL and distributed joins.
- Improve performance by moving data closer to the CPU, using RAM as storage.
- Spread data sets across a cluster to achieve horizontal scalability.
- Integrate Ignite with RDBMS, NoSQL, Hadoop and machine learning processors.

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
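The combination of in-memory speed with persistence that never needs syncing back to a relational database can be pictured as a read-through/write-through cache. A toy, plain-Python sketch with hypothetical class names follows; Ignite itself does this durably and across a cluster:

```python
# Reads hit RAM when possible; writes go to both RAM and the backing store,
# so the durable copy is always current (write-through).
class WriteThroughCache:
    def __init__(self, backing_store: dict):
        self.memory = {}              # hot data in RAM
        self.disk = backing_store     # durable store (a dict stands in here)

    def put(self, key, value):
        self.memory[key] = value
        self.disk[key] = value        # persist immediately, no later sync

    def get(self, key):
        if key in self.memory:        # fast path: in-memory hit
            return self.memory[key]
        value = self.disk[key]        # read-through: load from the store
        self.memory[key] = value      # keep it close to the CPU next time
        return value

store = {"a": 1}                      # data already persisted
cache = WriteThroughCache(store)
cache.put("b", 2)
print(cache.get("a"), cache.get("b"), store["b"])  # 1 2 2
```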

Vespa is an open-source big data processing and serving engine created by Yahoo. It is used to respond to user queries, make recommendations, and provide personalized content and advertisements in real-time.

This instructor-led, live training introduces the challenges of serving large-scale data and walks participants through the creation of an application that can compute responses to user requests, over large datasets in real-time.

By the end of this training, participants will be able to:

- Use Vespa to quickly compute data (store, search, rank, organize) at serving time while a user waits
- Implement Vespa in existing applications involving feature search, recommendations, and personalization
- Integrate and deploy Vespa with existing big data systems such as Hadoop and Storm

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Apache Apex is a YARN-native platform that unifies stream and batch processing. It processes big data-in-motion in a way that is scalable, performant, fault-tolerant, stateful, secure, distributed, and easily operable.

This instructor-led, live training introduces Apache Apex's unified stream processing architecture, and walks participants through the creation of a distributed application using Apex on Hadoop.

By the end of this training, participants will be able to:

- Understand data processing pipeline concepts such as connectors for sources and sinks, common data transformations, etc.
- Build, scale and optimize an Apex application
- Process real-time data streams reliably and with minimum latency
- Use Apex Core and the Apex Malhar library to enable rapid application development
- Use the Apex API to write and re-use existing Java code
- Integrate Apex into other applications as a processing engine
- Tune, test and scale Apex applications

Audience

- Developers
- Enterprise architects

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Alluxio is an open-source virtual distributed storage system that unifies disparate storage systems and enables applications to interact with data at memory speed. It is used by companies such as Intel, Baidu and Alibaba.

In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio.

Apache Flink is an open-source framework for scalable stream and batch data processing.

This instructor-led, live training introduces the principles and approaches behind distributed stream and batch data processing, and walks participants through the creation of a real-time, data streaming application.

This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.

By the end of this training, participants will be able to:

- Use Samza to simplify the code needed to produce and consume messages.
- Decouple the handling of messages from an application.
- Use Samza to implement near-realtime asynchronous computation.
- Use stream processing to provide a higher level of abstraction over messaging systems.

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Apache Zeppelin is a web-based notebook for capturing, exploring, visualizing and sharing Hadoop and Spark based data.

This instructor-led, live training introduces the concepts behind interactive data analytics and walks participants through the deployment and usage of Zeppelin in a single-user or multi-user environment.

By the end of this training, participants will be able to:

- Install and configure Zeppelin
- Develop, organize, execute and share data in a browser-based interface
- Visualize results without referring to the command line or cluster details
- Execute and collaborate on long workflows
- Work with any of a number of plug-in language/data-processing backends, such as Scala (with Apache Spark), Python (with Apache Spark), Spark SQL, JDBC, Markdown and Shell
- Integrate Zeppelin with Spark, Flink and MapReduce
- Secure multi-user instances of Zeppelin with Apache Shiro

Audience

- Data engineers
- Data analysts
- Data scientists
- Software developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Magellan is an open-source distributed execution engine for geospatial analytics on big data. Implemented on top of Apache Spark, it extends Spark SQL and provides a relational abstraction for geospatial analytics.

This instructor-led, live training introduces the concepts and approaches for implementing geospatial analytics and walks participants through the creation of a predictive analysis application using Magellan on Spark.

MonetDB is an open-source database that pioneered the column-store technology approach.

In this instructor-led, live training, participants will learn how to use MonetDB and how to get the most value out of it.

By the end of this training, participants will be able to:

- Understand MonetDB and its features
- Install and get started with MonetDB
- Explore and perform different functions and tasks in MonetDB
- Accelerate the delivery of their project by maximizing MonetDB capabilities

Audience

- Developers
- Technical experts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

In this instructor-led, live training, participants will learn how to use Pentaho Data Integration's powerful ETL capabilities and rich GUI to manage an entire big data lifecycle, maximizing the value of data to the organization.

By the end of this training, participants will be able to:

- Create, preview, and run basic data transformations containing steps and hops
- Configure and secure the Pentaho Enterprise Repository
- Harness disparate sources of data and generate a single, unified version of the truth in an analytics-ready format
- Provide results to third-party applications for further processing

Audience

- Data analysts
- ETL developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Stream Processing refers to the real-time processing of "data in motion", that is, performing computations on data as it is being received. Such data is read as continuous streams from data sources such as sensor events, website user activity, financial trades, credit card swipes, click streams, etc. Stream Processing frameworks are able to read large volumes of incoming data and provide valuable insights almost instantaneously.
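The record-by-record computation described above can be sketched in plain Python with a simple tumbling window. This is illustrative only; frameworks such as Spark Streaming and Kafka Streams add distribution and fault tolerance on top of the same core idea.

```python
# Process each record as it arrives and emit an aggregate per fixed-size
# ("tumbling") window of records.
def tumbling_sums(stream, window_size):
    """Yield the sum of every consecutive `window_size` records."""
    window = []
    for record in stream:            # one record at a time, as it arrives
        window.append(record)
        if len(window) == window_size:
            yield sum(window)        # emit the window's aggregate
            window = []              # start the next window

# e.g. card swipe amounts aggregated in windows of 3
swipes = [5, 10, 20, 1, 2, 3]
print(list(tumbling_sums(swipes, 3)))  # [35, 6]
```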

In this instructor-led, live training (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices.

By the end of this training, participants will be able to:

- Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streaming
- Understand and select the most appropriate framework for the job
- Process data continuously, concurrently, and in a record-by-record fashion
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices

Audience

- Developers
- Software architects

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice

Notes

- To request a customized training for this course, please contact us to arrange.

This instructor-led, live training (onsite or remote) is aimed at engineers who wish to use Confluent (a distribution of Kafka) to build and manage a real-time data processing platform for their applications.

Apache Spark's learning curve is steep at the beginning: it takes a lot of effort to get the first return. This course aims to jump through that first tough part. After taking this course, participants will understand the basics of Apache Spark, clearly differentiate an RDD from a DataFrame, learn the Python and Scala APIs, understand executors and tasks, and more. Following best practices, this course also focuses strongly on cloud deployment, with Databricks and AWS. Participants will also understand the differences between AWS EMR and AWS Glue, one of the latest Spark services from AWS.

Big data analytics involves the process of examining large amounts of varied data sets in order to uncover correlations, hidden patterns, and other useful insights.

The health industry has massive amounts of complex, heterogeneous medical and clinical data. Applying big data analytics to health data presents huge potential for deriving insights that improve the delivery of healthcare. However, the enormity of these datasets poses great challenges for analysis and for practical application in a clinical environment.

In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises.

Apache Arrow is an open-source in-memory data processing framework. It is often used together with other data science tools for accessing disparate data stores for analysis. It integrates well with other technologies such as GPU databases, machine learning libraries and tools, execution engines, and data visualization frameworks.
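Arrow's central idea is a columnar in-memory layout. Below is a plain-Python sketch of row-oriented records converted to that columnar form; this is illustrative only, as Arrow itself stores columns as typed, contiguous buffers shared across tools without copying.

```python
# Analytics that touch one field can scan a single contiguous column
# instead of picking that field out of every row.
rows = [
    {"id": 1, "price": 9.5},
    {"id": 2, "price": 3.0},
    {"id": 3, "price": 7.5},
]

# Row-to-column conversion (Arrow keeps data in columnar form natively)
columns = {key: [row[key] for row in rows] for key in rows[0]}

total = sum(columns["price"])   # one pass over one column
print(columns["price"], total)  # [9.5, 3.0, 7.5] 20.0
```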

In this onsite instructor-led, live training, participants will learn how to integrate Apache Arrow with various Data Science frameworks to access data from disparate data sources.

By the end of this training, participants will be able to:

- Install and configure Apache Arrow in a distributed clustered environment
- Use Apache Arrow to access data from disparate data sources
- Use Apache Arrow to bypass the need for constructing and maintaining complex ETL pipelines
- Analyze data across disparate data sources without having to consolidate it into a centralized repository

Audience

- Data scientists
- Data engineers

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- To request a customized training for this course, please contact us to arrange.

The website is operated by NobleProg Hong Kong Limited, a Franchisee of NobleProg Limited. If you are interested in opening a franchise in your country, please visit https://training-franchise.com for more information.

NobleProg® is a registered trade mark of NobleProg Limited and/or its affiliates.