Delivering this course:

Vadim is a cloud and architecture expert. He is a founding partner of DoIT International, an AWS Certified Solutions Architect, and a Google Developer Expert. Over the years, Vadim has helped countless companies turn their cloud ambitions into fully developed, cloud-based deployments. Vadim oversees technology and makes the hard stuff simple.

Big Data on Amazon Web Services (AWS)

Big data processing on AWS with Hadoop, Spark, Redshift, and more

Big Data on AWS introduces you to cloud-based big data solutions such as Amazon Elastic MapReduce (EMR), Amazon Redshift, Amazon Kinesis, and the rest of the AWS big data platform. In this course, we show you how to use Amazon EMR to process data with the broad ecosystem of Hadoop tools, such as Hive and Hue. We also teach you how to create big data environments; work with Amazon DynamoDB, Amazon Redshift, and Amazon Kinesis; and apply best practices to design big data environments for security and cost-effectiveness.

Objectives

This course teaches you how to:

Fit AWS solutions inside a big data ecosystem

Leverage Apache Hadoop in the context of Amazon EMR

Identify the components of an Amazon EMR cluster

Launch and configure an Amazon EMR cluster

Leverage common programming frameworks available for Amazon EMR including Hive, Pig, and Streaming

Leverage Hue to improve the ease-of-use of Amazon EMR

Use in-memory analytics with Spark and Spark SQL on Amazon EMR

Choose appropriate AWS data storage options

Identify the benefits of using Amazon Kinesis for near real-time big data processing

Define data warehousing and columnar database concepts

Leverage Amazon Redshift to efficiently store and analyze data

Understand and manage costs and security for Amazon EMR and Amazon Redshift deployments
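
As a taste of the "launch and configure an Amazon EMR cluster" objective, here is a minimal sketch of a cluster configuration using boto3, AWS's Python SDK. The cluster name, release label, instance types, and node counts below are illustrative assumptions, not values prescribed by the course:

```python
# Hypothetical EMR cluster configuration for boto3's run_job_flow call.
# All concrete values (name, release, instance types, counts) are assumptions.
cluster_config = {
    "Name": "bigdata-course-demo",      # assumed cluster name
    "ReleaseLabel": "emr-6.15.0",       # assumed EMR release
    "Applications": [                   # tools covered in the course
        {"Name": "Hadoop"},
        {"Name": "Hive"},
        {"Name": "Pig"},
        {"Name": "Hue"},
        {"Name": "Spark"},
    ],
    "Instances": {
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,             # 1 master + 2 core nodes
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    "JobFlowRole": "EMR_EC2_DefaultRole",  # default EMR instance profile
    "ServiceRole": "EMR_DefaultRole",
}

# With AWS credentials configured, the cluster could then be launched:
#   import boto3
#   emr = boto3.client("emr", region_name="us-east-1")
#   response = emr.run_job_flow(**cluster_config)
#   print(response["JobFlowId"])
```

In a real deployment you would also set logging (`LogUri`), subnet/security-group settings, and tags; the course covers those security and cost considerations in detail.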