<h1><a href="https://aws.amazon.com/blogs/big-data/easily-manage-table-metadata-for-presto-running-on-amazon-emr-using-the-aws-glue-data-catalog/">Easily manage table metadata for Presto running on Amazon EMR using the AWS Glue Data Catalog</a></h1>
<p><a href="https://aws.amazon.com/emr/" target="_blank" rel="noopener noreferrer">Amazon EMR</a> empowers many customers to build big data processing applications quickly and cost-effectively, using popular distributed frameworks such as <a href="https://spark.apache.org/documentation.html" target="_blank" rel="noopener noreferrer">Apache Spark</a>, <a href="https://hbase.apache.org/" target="_blank" rel="noopener noreferrer">Apache HBase</a>, <a href="https://prestodb.io/" target="_blank" rel="noopener noreferrer">Presto</a>, and <a href="https://flink.apache.org/" target="_blank" rel="noopener noreferrer">Apache Flink</a>. For organizations that are crafting their analytical applications on Amazon EMR, there is a growing need to keep their data assets organized in an automated fashion. Because datasets tend to grow exponentially, using cataloging tools is essential to automating data discovery and organizing data assets.</p>
<p>AWS Glue Data Catalog provides this essential capability, allowing you to automatically discover and catalog metadata about your data stores in a central repository. Since Amazon EMR 5.8.0, customers have been using the AWS Glue Data Catalog as a metadata store for Apache Hive and Spark SQL applications that are running on Amazon EMR. Starting with Amazon EMR 5.10.0, you can catalog datasets using AWS Glue and run queries using Presto on Amazon EMR from the Hue (Hadoop User Experience) and Apache Zeppelin UIs.</p>
<p>You might wonder what scenarios warrant using Presto running on Amazon EMR and when to choose Amazon Athena (which uses Presto as the query engine under the hood). Both are excellent tools for querying massive amounts of data, but they address different needs and use cases.</p>
<p>Amazon Athena provides the easiest way to run interactive queries for data in Amazon S3 without needing to set up or manage any servers. Presto running on Amazon EMR gives you much more flexibility in how you configure and run your queries, providing the ability to federate to other data sources if needed. For example, you might have a use case that requires LDAP authentication for clients such as the Presto CLI or JDBC/ODBC drivers. Or you might have a workflow where you need to join data between different systems like MySQL/Amazon Redshift/Apache Cassandra and Hive. In these examples, Presto running on Amazon EMR is the right tool to use because it can be configured to enable LDAP authentication in addition to the desired database connectors at cluster launch.</p>
<p>Now, let’s look at how metadata management for Presto works with AWS Glue.</p>
<h2>Using an AWS Glue crawler to discover datasets</h2>
<p>The AWS Glue Data Catalog is a reference to the location, schema, and runtime metrics of your datasets. To create this reference metadata, AWS Glue needs to crawl your datasets. In this exercise, we use an <a href="http://docs.aws.amazon.com/glue/latest/dg/add-crawler.html" target="_blank" rel="noopener noreferrer">AWS Glue crawler</a> to populate tables in the Data Catalog for the <a href="http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml" target="_blank" rel="noopener noreferrer">NYC taxi rides</a> dataset.</p>
<p>The following are the steps for adding a crawler (a scripted alternative using the AWS SDK follows these steps):</p>
<ol>
<li>Sign in to the AWS Management Console, and open the AWS Glue console. In the navigation pane, choose <strong>Crawlers</strong>. Then choose <strong>Add crawler</strong>.</li>
<li>On the <strong>Add a data store</strong> page, specify the location of the NYC taxi rides dataset.</li>
</ol>
<p><img class="alignnone size-full wp-image-4566" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/08/EMR_Presto1.png" alt="" width="800" height="495" /></p>
<ol start="3">
<li>In the next step, choose an existing IAM role if one is available, or create a new role. Then choose <strong>Next</strong>.</li>
</ol>
<p><img class="alignnone size-full wp-image-4567" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/08/EMR_Presto2.png" alt="" width="800" height="448" /></p>
<ol start="4">
<li>On the scheduling page, for <strong>Frequency</strong>, choose <strong>Run on demand</strong>.</li>
<li>On the <strong>Configure the crawler’s output</strong> page, choose <strong>Add database</strong>. Specify <strong>blog-db</strong> as the database name. (You can specify a name of your choice, but be sure to choose the correct database name when running queries.)</li>
<li>Follow the remaining steps using the default values to create a crawler.</li>
</ol>
<p><img class="alignnone size-full wp-image-4569" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/08/EMR_Presto3.png" alt="" width="800" height="87" /></p>
<ol start="7">
<li>When the crawler displays the <strong>Ready</strong> state, navigate to the <strong>Databases</strong> page. (Choose <strong>blog-db</strong> from the list of databases, or search for it by specifying it as a filter, as shown in the following screenshot.) Then choose <strong>Tables</strong>. You should see the three tables created by the crawler, as follows.</li>
</ol>
<p><img class="alignnone size-full wp-image-4570" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/08/EMR_Presto4.png" alt="" width="800" height="215" /></p>
<ol start="8">
<li>(Optional) The discovered data is classified as CSV files. You can optionally <a href="https://aws.amazon.com/blogs/big-data/harmonize-query-and-visualize-data-from-various-providers-using-aws-glue-amazon-athena-and-amazon-quicksight/" target="_blank" rel="noopener noreferrer">convert this data into Parquet format</a> for better response times on your queries.</li>
</ol>
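<p>If you prefer to script this setup rather than click through the console, the same crawler can be created with the AWS SDK. The following is a minimal boto3 sketch; the region, S3 path, crawler name, and IAM role name are placeholder assumptions to replace with your own values.</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

glue = boto3.client('glue', region_name='us-east-1')  # placeholder region

# Create the target database, then a crawler that populates tables in it.
glue.create_database(DatabaseInput={'Name': 'blog-db'})
glue.create_crawler(
    Name='nyc-taxi-crawler',                   # placeholder name
    Role='AWSGlueServiceRole-Default',         # placeholder IAM role
    DatabaseName='blog-db',
    Targets={'S3Targets': [{'Path': 's3://your-bucket/nyc-taxi-rides/'}]},
)

# Run the crawler on demand; it returns to the READY state when finished.
glue.start_crawler(Name='nyc-taxi-crawler')</code></pre>
</div>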
<h2>Launching an Amazon EMR cluster</h2>
<p>With the dataset discovered and organized, we can now walk through different options for launching Presto on an Amazon EMR cluster to use the AWS Glue Data Catalog.</p>
<h3>Option 1: From the Amazon EMR console</h3>
<ol>
<li>On the <a href="https://console.aws.amazon.com/elasticmapreduce" target="_blank" rel="noopener noreferrer">Amazon EMR console</a>, choose <strong>Create cluster</strong>.</li>
<li>In <strong>Create Cluster – Quick Options</strong>, choose EMR release <strong>emr-5.10.0 or greater</strong>.</li>
<li>Choose <strong>Presto</strong> as an application.</li>
<li>Under <strong>AWS Glue Data Catalog settings</strong>, select <strong>Use for Presto table metadata</strong>.</li>
</ol>
<h3>Option 2: From the AWS CLI</h3>
<ol>
<li>Create a classification configuration as shown following, and save it as a JSON file (presto-emr-config.json).</li>
</ol>
<div class="hide-language">
<pre><code class="lang-json"> [
{
&quot;Classification&quot;: &quot;presto-connector-hive&quot;,
&quot;Properties&quot;: {
&quot;hive.metastore.glue.datacatalog.enabled&quot;: &quot;true&quot;
}
}
]</code></pre>
</div>
<p>&nbsp;</p>
<ol start="2">
<li>Create the cluster using the AWS CLI as follows:</li>
</ol>
<div class="hide-language">
<pre><code class="lang-sql">aws emr create-cluster --name &quot;&lt;your-cluster-name&gt;&quot; --configurations file:///&lt;local-folder&gt;/presto-emr-config.json --release-label emr-5.10.0 --use-default-roles --ec2-attributes KeyName=&lt;your-key-name&gt; --applications Name=Hadoop Name=Spark Name=Hive Name=PRESTO Name=HUE --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=3,InstanceType=m3.xlarge</code></pre>
</div>
<p>&nbsp;</p>
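<p>If you manage infrastructure from code, the following boto3 sketch creates an equivalent cluster. It mirrors the CLI command above; the region, cluster name, and EC2 key name are placeholder assumptions.</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

emr = boto3.client('emr', region_name='us-east-1')  # placeholder region

# Mirrors the CLI command above: EMR 5.10.0 with Presto configured to use
# the AWS Glue Data Catalog as its metastore.
response = emr.run_job_flow(
    Name='your-cluster-name',  # placeholder
    ReleaseLabel='emr-5.10.0',
    Applications=[{'Name': n} for n in ('Hadoop', 'Spark', 'Hive', 'Presto', 'Hue')],
    Configurations=[{
        'Classification': 'presto-connector-hive',
        'Properties': {'hive.metastore.glue.datacatalog.enabled': 'true'},
    }],
    Instances={
        'InstanceGroups': [
            {'InstanceRole': 'MASTER', 'InstanceType': 'm3.xlarge', 'InstanceCount': 1},
            {'InstanceRole': 'CORE', 'InstanceType': 'm3.xlarge', 'InstanceCount': 3},
        ],
        'Ec2KeyName': 'your-key-name',  # placeholder
    },
    JobFlowRole='EMR_EC2_DefaultRole',
    ServiceRole='EMR_DefaultRole',
)
print(response['JobFlowId'])</code></pre>
</div>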
<h2>Running queries with Presto on Amazon EMR</h2>
<p>After you’ve set up the Amazon EMR cluster with Presto, the AWS Glue Data Catalog is available through a default “hive” catalog. To change between the Hive and Glue metastores, you have to manually update hive.properties and restart the Presto server. <a href="http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-connect-master-node-ssh.html" target="_blank" rel="noopener noreferrer">Connect to the master node on your EMR cluster using SSH</a>, and run the Presto CLI to start running queries interactively.</p>
<div class="hide-language">
<pre><code class="lang-sql">$ presto-cli --catalog hive </code></pre>
</div>
<p>&nbsp;</p>
<p>Begin with a simple query to sample a few rows:</p>
<div class="hide-language">
<pre><code class="lang-sql">presto&gt; SELECT * FROM “blog-db”.taxi limit 10;</code></pre>
</div>
<p>The query shows a few sample rows as follows:</p>
<p><img class="alignnone size-full wp-image-4571" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/08/EMR_Presto5.png" alt="" width="800" height="159" /></p>
<p>Query the average fare for trips at each hour of the day on the Parquet version of the taxi dataset.</p>
<div class="hide-language">
<pre><code class="lang-sql">presto&gt; SELECT EXTRACT (HOUR FROM pickup_datetime) AS hour, avg(fare_amount) AS average_fare FROM “blog-db”.taxi_parquet GROUP BY 1 ORDER BY 1;</code></pre>
</div>
<p>The following image shows the results:</p>
<p><img class="alignnone size-full wp-image-4572" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/08/EMR_Presto6.png" alt="" width="800" height="421" /></p>
<p>More interestingly, you can compute the number of trips that gave tips in the 10 percent, 15 percent, or higher percentage range:</p>
<div class="hide-language">
<pre><code class="lang-sql">presto&gt; -- Tip Percent Category
SELECT TipPrctCtgry
, COUNT (DISTINCT TripID) TripCt
FROM
(SELECT TripID
, (CASE
WHEN fare_prct &lt; 0.7 THEN 'FL70'
WHEN fare_prct &lt; 0.8 THEN 'FL80'
WHEN fare_prct &lt; 0.9 THEN 'FL90'
ELSE 'FL100'
END) FarePrctCtgry
, (CASE
WHEN tip_prct &lt; 0.1 THEN 'TL10'
WHEN tip_prct &lt; 0.15 THEN 'TL15'
WHEN tip_prct &lt; 0.2 THEN 'TL20'
ELSE 'TG20'
END) TipPrctCtgry
FROM
(SELECT TripID
, (fare_amount / total_amount) as fare_prct
, (extra / total_amount) as extra_prct
, (mta_tax / total_amount) as tip_prct
, (tolls_amount / total_amount) as mta_taxprct
, (tip_amount / total_amount) as tolls_prct
, (improvement_surcharge / total_amount) as imprv_suchrgprct
, total_amount
FROM
(SELECT *
, (cast(pickup_longitude AS VARCHAR(100)) || '_' || cast(pickup_latitude AS VARCHAR(100))) as TripID
from &quot;blog-db”.taxi_parquet
WHERE total_amount &gt; 0
) as t
) as t
) ct
GROUP BY TipPrctCtgry;</code></pre>
</div>
<p>The results are as follows:</p>
<p><img class="alignnone size-full wp-image-4573" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/08/EMR_Presto7.png" alt="" width="800" height="667" /></p>
<p>While the preceding query is running, navigate to the web interface for Presto on Amazon EMR at <tt>http://<span style="color: #ff0000"><em>master-public-dns-name</em></span>:8889/</tt>. Here you can look into the query metrics, such as active worker nodes, number of rows read per second, reserved memory, and parallelism.</p>
<p><img class="alignnone size-full wp-image-4574" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/08/EMR_Presto8.png" alt="" width="800" height="431" /></p>
<h3>Running queries in the Presto Editor on Hue</h3>
<p>If you installed <a href="https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hue.html" target="_blank" rel="noopener noreferrer">Hue</a> with your Amazon EMR launch, you can also run queries on Hue’s Presto Editor. On the Amazon EMR Cluster console, choose <strong>Enable Web Connection</strong>, and follow the instructions to access the web interfaces for Hue and Zeppelin.</p>
<p><img class="alignnone size-full wp-image-4575" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/08/EMR_Presto9.png" alt="" width="667" height="81" /></p>
<p>After the web connection is enabled, choose the <strong>Hue</strong> link to open the web interface. At the login screen, if you are the administrator logging in for the first time, type a user name and password to create your Hue superuser account, and then choose <strong>Create account</strong>. Otherwise, sign in with your existing credentials or those provided by your administrator.</p>
<p><img class="alignnone size-full wp-image-4576" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/08/EMR_Presto10.png" alt="" width="585" height="80" /></p>
<p>Choose the Presto Editor from the menu. You can run Presto queries against your tables in the AWS Glue Data Catalog.</p>
<p><img class="alignnone size-full wp-image-4577" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/08/EMR_Presto11.png" alt="" width="587" height="297" /></p>
<p><img class="alignnone size-full wp-image-4578" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/08/EMR_Presto12.png" alt="" width="800" height="468" /></p>
<h2>Conclusion</h2>
<p>Having a shared data catalog for applications on Amazon EMR alleviates a myriad of data-related challenges that organizations face today—including discovery, governance, auditability, and collaboration. In this post, we explored how the AWS Glue Data Catalog addresses discoverability and manageability for table metadata for Presto on Amazon EMR. Go ahead, give this a try, and share your experience with us!</p>
<hr />
<p>&nbsp;</p>
<p><img class="alignnone size-full wp-image-4594" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/14/big_data_01.png" alt="" width="800" height="16" /></p>
<p>&nbsp;</p>
<hr />
<h3>Additional Reading</h3>
<p>If you found this post useful, be sure to check out <a href="https://aws.amazon.com/blogs/big-data/custom-log-presto-query-events-on-amazon-emr-for-auditing-and-performance-insights/" target="_blank" rel="noopener noreferrer">Custom Log Presto Query Events on Amazon EMR for Auditing and Performance Insights</a> and <a href="https://aws.amazon.com/blogs/big-data/build-a-multi-tenant-amazon-emr-cluster-with-kerberos-microsoft-active-directory-integration-and-emrfs-authorization/" target="_blank" rel="noopener noreferrer">Build a Multi-Tenant Amazon EMR Cluster with Kerberos, Microsoft Active Directory Integration and EMRFS Authorization</a>.</p>
<p><img class="alignnone size-thumbnail wp-image-4341" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs7-150x150.png" alt="" width="150" height="150" /></p>
<hr />
<h3>About the Author</h3>
<div style="width: 100%">
<img class="size-medium wp-image-1367 alignleft" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2016/12/29/radhika_90.jpg" alt="radhika_90" width="90" height="90" />
<strong>Radhika Ravirala is a Solutions Architect at Amazon Web Services</strong> where she helps customers craft distributed big data applications on the AWS platform. Prior to her cloud journey, she worked as a software engineer and designer for technology companies in Silicon Valley. She holds an M.S. in computer science from San Jose State University.
</div>
<p>&nbsp;</p>
<h1><a href="https://aws.amazon.com/blogs/big-data/improve-the-operational-efficiency-of-amazon-elasticsearch-service-domains-with-automated-alarms-using-amazon-cloudwatch/">Improve the Operational Efficiency of Amazon Elasticsearch Service Domains with Automated Alarms Using Amazon CloudWatch</a></h1>
<p>A customer has been successfully creating and running multiple <a href="https://aws.amazon.com/elasticsearch-service" target="_blank" rel="noopener noreferrer">Amazon Elasticsearch Service</a> (Amazon ES) domains to support their business users’ search needs across products, orders, support documentation, and a growing suite of similar needs. The service has become heavily used across the organization. This led to some domains running at 100% capacity during peak times, while others began to run low on storage space. Because of this increased usage, the technical teams were in danger of missing their service level agreements. They contacted me for help.</p>
<p>This post shows how you can set up automated alarms to warn when domains need attention.<span id="more-4536"></span></p>
<h2>Solution overview</h2>
<p>Amazon ES is a fully managed service that delivers Elasticsearch’s easy-to-use APIs and real-time analytics capabilities along with the availability, scalability, and security that production workloads require. The service offers built-in integrations with a number of other components and AWS services, enabling customers to go from raw data to actionable insights quickly and securely.</p>
<p>One of these other integrated services is <a href="https://aws.amazon.com/cloudwatch/">Amazon CloudWatch</a>. CloudWatch is a monitoring service for AWS Cloud resources and the applications that you run on AWS. You can use CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.</p>
<p>CloudWatch collects metrics for Amazon ES. You can use these metrics to monitor the state of your Amazon ES domains, and set alarms to notify you about high utilization of system resources. For more information, see <a href="http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/es-metricscollected.html" target="_blank" rel="noopener noreferrer">Amazon Elasticsearch Service Metrics and Dimensions</a>.</p>
<p>While the metrics are automatically collected, the missing piece is how to set alarms on these metrics at appropriate levels for each of your domains. This post includes sample Python code to evaluate the current state of your Amazon ES environment, and to set up alarms according to AWS recommendations and best practices.</p>
<p>There are two components to the sample solution:</p>
<ul>
<li><a href="https://github.com/aws-samples/amazon-es-check-cw-alarms/blob/master/es-check-cwalarms.py" target="_blank" rel="noopener noreferrer"><tt>es-check-cwalarms.py</tt></a>: This Python script checks the CloudWatch alarms that have been set for all Amazon ES domains in a given account and region.</li>
<li><a href="https://github.com/aws-samples/amazon-es-check-cw-alarms/blob/master/es-create-cwalarms.py" target="_blank" rel="noopener noreferrer"><tt>es-create-cwalarms.py</tt></a>: This Python script sets up a set of CloudWatch alarms for a single given domain.</li>
</ul>
<p>The sample code can also be found in the <a href="https://github.com/aws-samples/amazon-es-check-cw-alarms">amazon-es-check-cw-alarms</a> GitHub repo. The scripts are easy to extend or combine, as described in the section “Extensions and Adaptations”.</p>
<h3>Assessing the current state</h3>
<p>The first script, <tt>es-check-cwalarms.py</tt>, is used to give an overview of the configurations and alarm settings for all the Amazon ES domains in the given region. The script takes the following parameters:</p>
<div class="hide-language">
<pre><code class="lang-python">python es-checkcwalarms.py -h
usage: es-checkcwalarms.py [-h] [-e ESPREFIX] [-n NOTIFY] [-f FREE][-p PROFILE] [-r REGION]
Checks a set of recommended CloudWatch alarms for Amazon Elasticsearch Service domains (optionally, those beginning with a given prefix).
optional arguments:
-h, --help show this help message and exit
-e ESPREFIX, --esprefix ESPREFIX Only check Amazon Elasticsearch Service domains that begin with this prefix.
-n NOTIFY, --notify NOTIFY List of CloudWatch alarm actions; e.g. ['arn:aws:sns:xxxx']
-f FREE, --free FREE Minimum free storage (MB) on which to alarm
-p PROFILE, --profile PROFILE IAM profile name to use
-r REGION, --region REGION AWS region for the domain. Default: us-east-1</code></pre>
</div>
<p>The script first identifies all the domains in the given region (or, optionally, limits them to the subset that begins with a given prefix). It then starts running a set of checks against each one.</p>
<p>The script can be run from the command line or set up as a scheduled Lambda function. For example, for one customer, it was deemed appropriate to regularly run the script to check that alarms were correctly set for all domains. In addition, because configuration changes—cluster size increases to accommodate larger workloads being a common change—might require updates to alarms, this approach allowed the automatic identification of alarms no longer appropriately set as the domain configurations changed.</p>
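<p>As a sketch of the scheduled approach, the following boto3 calls wire an already-deployed Lambda function to a daily CloudWatch Events rule. The function name and ARN are placeholder assumptions.</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

events = boto3.client('events', region_name='us-east-1')   # placeholder region
awslambda = boto3.client('lambda', region_name='us-east-1')

# A rule that fires once a day.
rule = events.put_rule(Name='es-check-alarms-daily', ScheduleExpression='rate(1 day)')

# Allow CloudWatch Events to invoke the function, then attach it to the rule.
# The function name and ARN are placeholders for your deployment.
awslambda.add_permission(
    FunctionName='es-check-cwalarms',
    StatementId='es-check-cwalarms-daily',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)
events.put_targets(
    Rule='es-check-alarms-daily',
    Targets=[{'Id': '1',
              'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:es-check-cwalarms'}],
)</code></pre>
</div>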
<p>The output shown below is the output for one domain in my account.</p>
<div class="hide-language">
<pre><code class="lang-python">Starting checks for Elasticsearch domain iotfleet , version is 53
Iotfleet Automated snapshot hour (UTC): 0
Iotfleet Instance configuration: 1 instances; type:m3.medium.elasticsearch
Iotfleet Instance storage definition is: 4 GB; free storage calced to: 819.2 MB
iotfleet Desired free storage set to (in MB): 819.2
iotfleet WARNING: Not using VPC Endpoint
iotfleet WARNING: Does not have Zone Awareness enabled
iotfleet WARNING: Instance count is ODD. Best practice is for an even number of data nodes and zone awareness.
iotfleet WARNING: Does not have Dedicated Masters.
iotfleet WARNING: Neither index nor search slow logs are enabled.
iotfleet WARNING: EBS not in use. Using instance storage only.
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-ClusterStatus.yellow-Alarm ClusterStatus.yellow
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-ClusterStatus.red-Alarm ClusterStatus.red
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-CPUUtilization-Alarm CPUUtilization
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-JVMMemoryPressure-Alarm JVMMemoryPressure
iotfleet WARNING: Missing alarm!! ('ClusterIndexWritesBlocked', 'Maximum', 60, 5, 'GreaterThanOrEqualToThreshold', 1.0)
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-AutomatedSnapshotFailure-Alarm AutomatedSnapshotFailure
iotfleet Alarm: Threshold does not match: Test-Elasticsearch-iotfleet-FreeStorageSpace-Alarm Should be: 819.2 ; is 3000.0</code></pre>
</div>
<p>The output messages fall into the following categories:</p>
<ul>
<li><strong>System overview, Informational:</strong> The Amazon ES version and configuration, including instance type and number, storage, automated snapshot hour, etc.</li>
<li><strong>Free storage:</strong> A calculation of the appropriate amount of free storage, based on the recommended 20% of total storage (for example, 819.2 MB for the 4-GB volume above).</li>
<li><strong>Warnings:</strong> best practices that are not being followed for this domain. (For more about this, read on.)</li>
<li><strong>Alarms:</strong> An assessment of the CloudWatch alarms currently set for this domain, against a recommended set.</li>
</ul>
<p>The script contains an array of recommended CloudWatch alarms, based on best practices for these <a href="http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains.html#es-managedomains-cloudwatchmetrics" target="_blank" rel="noopener noreferrer">metrics and statistics</a>. Using the array allows alarm parameters (such as free space) to be updated within the code based on current domain statistics and configurations.</p>
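<p>To illustrate the idea (this is a sketch, not the script’s literal contents), such an array might pair each metric with its statistic, period, evaluation count, comparison operator, and threshold, matching the tuple format visible in the “Missing alarm” warning shown earlier. The metric names are real Amazon ES CloudWatch metrics; the thresholds are illustrative.</p>
<div class="hide-language">
<pre><code class="lang-python"># Illustrative sketch only; thresholds are examples, not the script's values.
# Tuple format: (metric, statistic, period_sec, eval_periods, operator, threshold)
totalStorageMB = 4096                        # example: a 4-GB domain
desiredFreeStorageMB = 0.2 * totalStorageMB  # the recommended 20% free-space rule
esAlarms = [
    ('ClusterStatus.yellow', 'Maximum', 60, 5, 'GreaterThanOrEqualToThreshold', 1.0),
    ('ClusterStatus.red', 'Maximum', 60, 5, 'GreaterThanOrEqualToThreshold', 1.0),
    ('CPUUtilization', 'Average', 300, 3, 'GreaterThanOrEqualToThreshold', 80.0),
    ('JVMMemoryPressure', 'Maximum', 300, 3, 'GreaterThanOrEqualToThreshold', 85.0),
    ('ClusterIndexWritesBlocked', 'Maximum', 60, 5, 'GreaterThanOrEqualToThreshold', 1.0),
    ('AutomatedSnapshotFailure', 'Maximum', 60, 1, 'GreaterThanOrEqualToThreshold', 1.0),
    ('FreeStorageSpace', 'Minimum', 60, 1, 'LessThanOrEqualToThreshold', desiredFreeStorageMB),
]</code></pre>
</div>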
<p>For a given domain, the script checks if each alarm has been set. If the alarm is set, it checks whether the values match those in the array <tt>esAlarms</tt>. In the output above, you can see three different situations being reported:</p>
<ul>
<li><strong>Alarm ok; definition matches</strong>. The alarm set for the domain matches the settings in the array.</li>
<li><strong>Alarm: Threshold does not match.</strong> An alarm exists, but the threshold value at which the alarm is triggered does not match.</li>
<li><strong>WARNING: Missing alarm!!</strong> The recommended alarm is missing.</li>
</ul>
<p>All in all, the list above shows that this domain does not have a configuration that adheres to best practices, nor does it have all the recommended alarms.</p>
<h3>Setting up alarms</h3>
<p>Now that you know that the domains in their current state are missing critical alarms, you can correct the situation.</p>
<p>To demonstrate the script, set up a new domain named “ver”, in us-west-2. Specify 1 node, and a 10-GB EBS disk. Also, create an SNS topic in us-west-2 with a name of “sendnotification”, which sends you an email.</p>
<p>Run the second script, <tt>es-create-cwalarms.py</tt>, from the command line. This script creates (or updates) the desired CloudWatch alarms for the specified Amazon ES domain, “ver”.</p>
<div class="hide-language">
<pre><code class="lang-python">python es-create-cwalarms.py -r us-west-2 -e test -c ver -n &quot;['arn:aws:sns:us-west-2:xxxxxxxxxx:sendnotification']&quot;
EBS enabled: True type: gp2 size (GB): 10 No Iops 10240 total storage (MB)
Desired free storage set to (in MB): 2048.0
Creating Test-Elasticsearch-ver-ClusterStatus.yellow-Alarm
Creating Test-Elasticsearch-ver-ClusterStatus.red-Alarm
Creating Test-Elasticsearch-ver-CPUUtilization-Alarm
Creating Test-Elasticsearch-ver-JVMMemoryPressure-Alarm
Creating Test-Elasticsearch-ver-FreeStorageSpace-Alarm
Creating Test-Elasticsearch-ver-ClusterIndexWritesBlocked-Alarm
Creating Test-Elasticsearch-ver-AutomatedSnapshotFailure-Alarm
Successfully finished creating alarms!</code></pre>
</div>
<p>As with the first script, this script contains an array of recommended CloudWatch alarms, based on best practices for <a href="http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains.html#es-managedomains-cloudwatchmetrics" target="_blank" rel="noopener noreferrer">these metrics and statistics</a>. This approach allows you to add or modify alarms based on your use case (more on that below).</p>
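<p>Under the hood, each entry in such an array boils down to one CloudWatch <tt>put_metric_alarm</tt> call. The following is a minimal sketch for the free-storage alarm on the “ver” domain; the account ID and SNS topic ARN are placeholders.</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

cw = boto3.client('cloudwatch', region_name='us-west-2')

# Amazon ES metrics live in the AWS/ES namespace and are dimensioned by
# domain name and the owning account (client) ID.
cw.put_metric_alarm(
    AlarmName='Test-Elasticsearch-ver-FreeStorageSpace-Alarm',
    Namespace='AWS/ES',
    MetricName='FreeStorageSpace',
    Dimensions=[
        {'Name': 'DomainName', 'Value': 'ver'},
        {'Name': 'ClientId', 'Value': '123456789012'},  # placeholder account ID
    ],
    Statistic='Minimum',
    Period=300,
    EvaluationPeriods=1,
    ComparisonOperator='LessThanOrEqualToThreshold',
    Threshold=2048.0,  # 20% of the 10-GB volume, in MB
    AlarmActions=['arn:aws:sns:us-west-2:123456789012:sendnotification'],  # placeholder
)</code></pre>
</div>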
<p>After running the script, navigate to <strong>Alarms</strong> on the CloudWatch console. You can see the set of alarms set up on your domain.</p>
<p><img class="alignnone size-full wp-image-4549" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/02/Alarms1.png" alt="" width="800" height="214" /></p>
<p>Because the “ver” domain has only a single node, cluster status is yellow, and that alarm is in an “ALARM” state. It’s already sent a notification that the alarm has been triggered.</p>
<h3>What to do when an alarm triggers</h3>
<p>After alarms are set up, you need to identify the correct action to take for each alarm, which depends on the alarm triggered. For ideas, guidance, and additional pointers to supporting documentation, see <a href="https://aws.amazon.com/blogs/database/get-started-with-amazon-elasticsearch-service-set-cloudwatch-alarms-on-key-metrics/" target="_blank" rel="noopener noreferrer">Get Started with Amazon Elasticsearch Service: Set CloudWatch Alarms on Key Metrics</a>. For information about common errors and recovery actions to take, see <a href="https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-handling-errors.html" target="_blank" rel="noopener noreferrer">Handling AWS Service Errors</a>.</p>
<p>In most cases, the alarm triggers due to an increased workload. The likely action is to reconfigure the system to handle the increased workload, rather than reducing the incoming workload. Reconfiguring any backend store—a category of systems that includes Elasticsearch—is best performed when the system is quiescent or lightly loaded. Reconfigurations such as setting zone awareness or modifying the disk type cause Amazon ES to enter a “processing” state, potentially disrupting client access.</p>
<p>Other changes, such as increasing the number of data nodes, may cause Elasticsearch to begin moving shards, potentially impacting search performance on these shards while this is happening. These actions should be considered in the context of your production usage. For the same reason, I also do not recommend running a script that resets all domains to match best practices.</p>
<p>Avoid the need to reconfigure during heavy workload by setting alarms at a level that allows a considered approach to making the needed changes. For example, if you identify that each weekly peak is increasing, you can reconfigure during a weekly quiet period.</p>
<p>While Elasticsearch can be reconfigured without being quiesced, it is not a best practice to automatically scale it up and down based on usage patterns. Unlike some other AWS services, I recommend against setting a CloudWatch action that automatically reconfigures the system when alarms are triggered.</p>
<p>There are other situations where the planned reconfiguration approach may not work, such as low or zero free disk space causing the domain to reject writes. If the business is dependent on the domain continuing to accept incoming writes and deleting data is not an option, the team may choose to reconfigure immediately.</p>
<h3>Extensions and adaptations</h3>
<p>You may wish to modify the best practices encoded in the scripts for your own environment or workloads. It’s always better to avoid situations where alerts are generated but routinely ignored. All alerts should trigger a review and one or more actions, either immediately or at a planned date. The following is a list of common situations where you may wish to set different alarms for different domains:</p>
<ul>
<li><strong>Dev/test vs. production<br /> </strong>You may have a different set of configuration rules and alarms for your dev environment configurations than for test. For example, you may require zone awareness and dedicated masters for your production environment, but not for your development domains. Or, you may not have any alarms set in dev. For test environments that mirror your potential peak load, test to ensure that the alarms are appropriately triggered.</li>
<li><strong>Differing workloads or SLAs for different domains</strong><br /> You may have one domain with a requirement for superfast search performance, and another domain with a heavy ingest load that tolerates slower search response. Your reaction to slow response for these two workloads is likely to be different, so perhaps the thresholds for these two domains should be set at a different level. In this case, you might add a “max CPU utilization” alarm at 100% for 1 minute for the fast search domain, while the other domain only triggers an alarm when the average has been higher than 60% for 5 minutes. You might also add a “free space” rule with a higher threshold to reflect the need for more space for the heavy ingest load if there is danger that it could fill the available disk quickly.</li>
<li><strong>“Normal” alarms versus “emergency” alarms<br /> </strong>If, for example, free disk space drops to 25% of total capacity, an alarm is triggered that indicates action should be taken as soon as possible, such as cleaning up old indexes or reconfiguring at the next quiet period for this domain. However, if free space drops below a critical level (20% free space), action must be taken immediately in order to prevent Amazon ES from setting the domain to read-only. Similarly, if the “ClusterIndexWritesBlocked” alarm triggers, the domain has already stopped accepting writes, so immediate action is needed. In this case, you may wish to set “laddered” alarms, where one threshold causes an alarm to be triggered to review the current workload for a planned reconfiguration, but a different threshold raises a “DefCon 3” alarm that immediate action is required.</li>
</ul>
<p>The sample scripts provided here are a starting point, intended for you to adapt to your own environment and needs.</p>
<p>Running the scripts one time can identify how far your current state is from your desired state, and create an initial set of alarms. Regularly re-running these scripts can capture changes in your environment and configurations over time, and adjust your alarms accordingly. One customer has set them up to run nightly, and to automatically create and update alarms to match their preferred settings.</p>
<h3>Removing unwanted alarms</h3>
<p>Each CloudWatch alarm <a href="https://aws.amazon.com/cloudwatch/pricing/" target="_blank" rel="noopener noreferrer">costs approximately</a> $0.10 per month. You can remove unwanted alarms in the CloudWatch console, under <strong>Alarms</strong>. If you set up a “ver” domain above, remember to remove it to avoid continuing charges.</p>
<h2>Conclusion</h2>
<p>Setting CloudWatch alarms appropriately for your Amazon ES domains can help you avoid suboptimal performance and allow you to respond to workload growth or configuration issues well before they become urgent. This post gives you a starting point for doing so. The additional sleep you’ll get knowing you don’t need to be concerned about Elasticsearch domain performance will allow you to focus on building creative solutions for your business and solving problems for your customers.</p>
<p>Enjoy!</p>
<hr />
<h3>Additional Reading</h3>
<p>If you found this post useful, be sure to check out&nbsp;<a href="https://aws.amazon.com/blogs/database/analyzing-amazon-elasticsearch-service-slow-logs-using-amazon-cloudwatch-logs-streaming-and-kibana/" target="_blank" rel="noopener noreferrer">Analyzing Amazon Elasticsearch Service Slow Logs Using Amazon CloudWatch Logs Streaming and Kibana</a> and <a href="https://aws.amazon.com/blogs/database/get-started-with-amazon-elasticsearch-service-how-many-shards-do-i-need/" target="_blank" rel="noopener noreferrer">Get Started with Amazon Elasticsearch Service: How Many Shards Do I Need?</a></p>
<p>&nbsp;</p>
<hr />
<h3>About the Author</h3>
<p><strong><img class="size-full wp-image-2719 alignleft" src="https://d2908q01vomqb2.cloudfront.net/887309d048beef83ad3eabf2a79a64a389ab1c9f/2018/02/09/Veronika-Megler.jpg" alt="" width="100" height="141" />Dr. Veronika Megler is a senior consultant at Amazon Web Services</strong>. She works with our customers to implement innovative big data, AI and ML projects, helping them accelerate their time-to-value when using AWS.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<h1><a href="https://aws.amazon.com/blogs/big-data/best-practices-for-running-apache-kafka-on-aws/">Best Practices for Running Apache Kafka on AWS</a></h1>
<p><em>This post was written in partnership with <a href="https://www.intuit.com/" target="_blank" rel="noopener noreferrer">Intuit</a> to share learnings, best practices, and recommendations for running an </em><a href="https://kafka.apache.org/" target="_blank" rel="noopener noreferrer"><em>Apache Kafka</em></a><em> cluster on AWS. Thanks to Vaishak Suresh and his colleagues at Intuit for their contribution and support.</em></p>
<p>Intuit, in their own words: <em>Intuit</em><em>, a leading enterprise customer for AWS, is a creator of business and financial management solutions. For more information on how Intuit partners with AWS, see our previous blog post, </em><a href="https://aws.amazon.com/blogs/big-data/real-time-stream-processing-using-apache-spark-streaming-and-apache-kafka-on-aws/" target="_blank" rel="noopener noreferrer"><em>Real-time Stream Processing Using Apache Spark Streaming and Apache Kafka on AWS</em></a><em>. Apache Kafka</em><em> is an open-source, distributed streaming platform that enables you to build real-time streaming applications. </em></p>
<p>The best practices described in this post are based on our experience in running and operating large-scale Kafka clusters on AWS for more than two years. Our intent for this post is to help AWS customers who are currently running Kafka on AWS, and also customers who are considering migrating on-premises Kafka deployments to AWS.<span id="more-4518"></span></p>
<p>AWS offers <a href="https://aws.amazon.com/kinesis/data-streams/" target="_blank" rel="noopener noreferrer">Amazon Kinesis Data Streams</a>, a Kafka alternative that is fully managed.</p>
<p>Running your Kafka deployment on <a href="https://aws.amazon.com/ec2/" target="_blank" rel="noopener noreferrer">Amazon EC2</a> provides a high performance, scalable solution for ingesting streaming data. AWS offers many different <a href="https://aws.amazon.com/ec2/instance-types/" target="_blank" rel="noopener noreferrer">instance types</a> and storage option combinations for Kafka deployments. However, given the number of possible deployment topologies, it’s not always trivial to select the most appropriate strategy suitable for your use case.</p>
<p>In this blog post, we cover the following aspects of running Kafka clusters on AWS:</p>
<ul>
<li>Deployment considerations and patterns</li>
<li>Storage options</li>
<li>Instance types</li>
<li>Networking</li>
<li>Upgrades</li>
<li>Performance tuning</li>
<li>Monitoring</li>
<li>Security</li>
<li>Backup and restore</li>
</ul>
<p>Note: While implementing Kafka clusters in a production environment, make sure also to consider factors like your number of messages, message size, monitoring, failure handling, and any operational issues.</p>
<h2>Deployment considerations and patterns</h2>
<p>In this section, we discuss various deployment options available for Kafka on AWS, along with pros and cons of each option. A successful deployment starts with thoughtful consideration of these options. Considering availability, consistency, and operational overhead of the deployment helps when choosing the right option.</p>
<h3>Single AWS Region, Three Availability Zones, All Active</h3>
<p>One typical deployment pattern (all active) is in a single AWS Region with three Availability Zones (AZs). One Kafka cluster is deployed in each AZ along with Apache ZooKeeper and Kafka producer and consumer instances as shown in the illustration following.</p>
<p><img class="size-full wp-image-4541 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/02/Kafka1_600px.png" alt="" width="600" height="477" /></p>
<p>In this pattern, this is the Kafka cluster deployment:</p>
<ul>
<li>Kafka producers and Kafka cluster are deployed on each AZ.</li>
<li>Data is distributed evenly across the three Kafka clusters by using an Elastic Load Balancer.</li>
<li>Kafka consumers aggregate data from all three Kafka clusters.</li>
</ul>
<p>Kafka cluster failover occurs this way:</p>
<ul>
<li>Mark down all Kafka producers</li>
<li>Stop consumers</li>
<li>Debug and restack Kafka</li>
<li>Restart consumers</li>
<li>Restart Kafka producers</li>
</ul>
<p>Following are the pros and cons of this pattern.</p>
<table style="height: 228px" border="1" width="1031" cellpadding="10">
<tbody>
<tr style="background-color: #000000">
<td style="text-align: center" width="295"><span style="color: #ffffff"><strong>Pros</strong></span></td>
<td style="text-align: center" width="295"><span style="color: #ffffff"><strong>Cons</strong></span></td>
</tr>
<tr>
<td width="295">
<ul>
<li>Highly available</li>
<li>Can sustain the failure of two AZs</li>
<li>No message loss during failover</li>
<li>Simple deployment</li>
</ul></td>
<td width="295">
<ul>
<li>Very high operational overhead:
<ul>
<li>All changes need to be deployed three times, one for each Kafka cluster</li>
<li>Maintaining and monitoring three Kafka clusters</li>
<li>Maintaining and monitoring three consumer clusters</li>
</ul> </li>
</ul> </td>
</tr>
</tbody>
</table>
<p>A restart is required for patching and upgrading brokers in a Kafka cluster. In this approach, a rolling upgrade is done separately for each cluster.</p>
<h3>Single Region, Three Availability Zones, Active-Standby</h3>
<p>Another typical deployment pattern (active-standby) is in a single AWS Region with a single Kafka cluster and Kafka brokers and Zookeepers distributed across three AZs. Another similar Kafka cluster acts as a standby as shown in the illustration following. You can use Kafka mirroring with MirrorMaker to replicate messages between any two clusters.</p>
<p><img class="size-full wp-image-4523 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/02/Kafka2.png" alt="" width="800" height="379" /></p>
<p>In this pattern, this is the Kafka cluster deployment:</p>
<ul>
<li>Kafka producers are deployed on all three AZs.</li>
<li>Only one Kafka cluster is deployed across three AZs (active).</li>
<li>ZooKeeper instances are deployed on each AZ.</li>
<li>Brokers are spread evenly across all three AZs.</li>
<li>Kafka consumers can be deployed across all three AZs.</li>
<li>Standby Kafka producers and a Multi-AZ Kafka cluster are part of the deployment.</li>
</ul>
<p>Kafka cluster failover occurs this way:</p>
<ul>
<li>Switch traffic to standby Kafka producers cluster and Kafka cluster.</li>
<li>Restart consumers to consume from standby Kafka cluster.</li>
</ul>
<p>Following are the pros and cons of this pattern.</p>
<table style="height: 196px" border="1" width="1188" cellpadding="10">
<tbody>
<tr style="background-color: #000000">
<td style="text-align: center" width="295"><span style="color: #ffffff"><strong>Pros</strong></span></td>
<td style="text-align: center" width="295"><span style="color: #ffffff"><strong>Cons</strong></span></td>
</tr>
<tr>
<td width="295">
<ul>
<li>Less operational overhead when compared to the first option</li>
<li>Only one Kafka cluster to manage and consume data from</li>
<li>Can handle single AZ failures without activating a standby Kafka cluster</li>
</ul> </td>
<td width="295">
<ul>
<li>Added latency due to cross-AZ data transfer among Kafka brokers</li>
<li>For Kafka versions before 0.10, replicas for topic partitions have to be assigned so they’re distributed to the brokers on different AZs (rack-awareness)</li>
<li>The cluster can become unavailable in case of a network glitch, where ZooKeeper does not see Kafka brokers</li>
<li>Possibility of in-transit message loss during failover</li>
</ul> </td>
</tr>
</tbody>
</table>
<p>Intuit recommends using a single Kafka cluster in one AWS Region, with brokers distributed across three AZs (single region, three AZs). This approach offers stronger fault tolerance, because a failed AZ won’t cause Kafka downtime.</p>
<h2>Storage options</h2>
<p>There are two storage options for file storage in Amazon EC2:</p>
<ul>
<li><a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html" target="_blank" rel="noopener noreferrer">Ephemeral storage (instance store)</a></li>
<li><a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html" target="_blank" rel="noopener noreferrer">Amazon Elastic Block Store (Amazon EBS)</a></li>
</ul>
<p>Ephemeral storage is local to the Amazon EC2 instance. It can provide high IOPS based on the instance type. On the other hand, Amazon EBS volumes offer higher resiliency and you can configure IOPS based on your storage needs. EBS volumes also offer some distinct advantages in terms of recovery time. Your choice of storage is closely related to the type of workload supported by your Kafka cluster.</p>
<p>Kafka provides built-in fault tolerance by replicating data partitions across a configurable number of instances. If a broker fails, you can recover it by fetching all the data from other brokers in the cluster that host the other replicas. Depending on the size of the data transfer, it can affect recovery process and network traffic. These in turn eventually affect the cluster’s performance.</p>
<p>The following table contrasts the benefits of using an instance store versus using EBS for storage.</p>
<table style="height: 403px" border="1" width="1032" cellpadding="10">
<tbody>
<tr style="background-color: #000000">
<td style="text-align: center" width="295"><span style="color: #ffffff"><strong>Instance store</strong></span></td>
<td style="text-align: center" width="295"><span style="color: #ffffff"><strong>EBS</strong></span></td>
</tr>
<tr>
<td width="295">
<ul>
<li>Instance storage is recommended for large- and medium-sized Kafka clusters. For a large cluster, read/write traffic is distributed across a high number of brokers, so the loss of a broker has less of an impact. For smaller clusters, however, quick recovery of a failed node is important; recovering a failed broker takes longer and requires more network traffic in a smaller cluster.</li>
<li>Storage-optimized instances like h1, i3, and d2 are an ideal choice for distributed applications like Kafka.</li>
</ul></td>
<td width="295">
<ul>
<li>The primary advantage of using EBS in a Kafka deployment is that it significantly reduces data-transfer traffic when a broker fails or must be replaced. The replacement broker joins the cluster much faster.</li>
<li>Data stored on EBS is persisted in case of an instance failure or termination. The broker’s data stored on an EBS volume remains intact, and you can mount the EBS volume to a new EC2 instance. Most of the replicated data for the replacement broker is already available in the EBS volume and need not be copied over the network from another broker. Only the changes made after the original broker failure need to be transferred across the network. That makes this process much faster.</li>
</ul></td>
</tr>
</tbody>
</table>
<p>Intuit chose EBS because of their frequent instance restacking requirements and also other benefits provided by EBS.</p>
<p>Generally, Kafka deployments use a replication factor of three. EBS offers replication within the service, so Intuit chose a replication factor of two instead of three.</p>
<h2>Instance types</h2>
<p>The choice of <a href="https://aws.amazon.com/ec2/instance-types/" target="_blank" rel="noopener noreferrer">instance types</a> is generally driven by the type of storage required for your streaming applications on a Kafka cluster. If your application requires ephemeral storage, h1, i3, and d2 instances are your best option.</p>
<p>Intuit used r3.xlarge instances for their brokers and r3.large for ZooKeeper, with <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html" target="_blank" rel="noopener noreferrer">ST1 (throughput optimized HDD) EBS</a> for their Kafka cluster.</p>
<p>Here are sample benchmark numbers from Intuit tests.</p>
<table style="height: 33px" border="1" width="1040" cellpadding="10">
<tbody>
<tr style="background-color: #000000">
<td style="text-align: center" width="295"><span style="color: #ffffff"><strong>Configuration</strong></span></td>
<td style="text-align: center" width="295"><span style="color: #ffffff"><strong>Broker bytes (MB/s)</strong></span></td>
</tr>
<tr>
<td width="295">
<ul>
<li>r3.xlarge</li>
<li>ST1 EBS</li>
<li>12 brokers</li>
<li>12 partitions</li>
</ul></td>
<td width="295">Aggregate 346.9</td>
</tr>
</tbody>
</table>
<p>If you need EBS storage, then AWS has a newer-generation r4 instance. The r4 instance is superior to r3 in many ways:</p>
<ul>
<li>It has a faster processor (Broadwell).</li>
<li>EBS is optimized by default.</li>
<li>It features networking based on Elastic Network Adapter (ENA), with up to 10 Gbps on smaller sizes.</li>
<li>It costs 20 percent less than r3.</li>
</ul>
<p>Note: It’s always best practice to check for <a href="https://aws.amazon.com/ec2/instance-types/" target="_blank" rel="noopener noreferrer">the latest changes in instance types</a>.</p>
<h2>Networking</h2>
<p>The network plays a very important role in a distributed system like Kafka. A fast and reliable network ensures that nodes can communicate with each other easily. The available network throughput controls the maximum amount of traffic that Kafka can handle. Network throughput, combined with disk storage, is often the governing factor for cluster sizing.</p>
<p>If you expect your cluster to receive high read/write traffic, select an <a href="https://aws.amazon.com/ec2/instance-types/" target="_blank" rel="noopener noreferrer">instance type</a> that offers 10-Gb/s performance.</p>
<p>In addition, choose an option that keeps interbroker network traffic on the private subnet, because this approach allows clients to connect to the brokers. Communication between brokers and clients uses the same network interface and port. For more details, see <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html" target="_blank" rel="noopener noreferrer">the documentation about IP addressing for EC2 instances</a>.</p>
<p>If you are deploying in more than one AWS Region, you can connect the two VPCs in the two AWS Regions using <a href="https://aws.amazon.com/answers/networking/aws-multiple-region-multi-vpc-connectivity/" target="_blank" rel="noopener noreferrer">cross-region VPC peering</a>. However, be aware of the <a href="https://aws.amazon.com/ec2/pricing/" target="_blank" rel="noopener noreferrer">networking costs</a> associated with cross-AZ deployments.</p>
<h2>Upgrades</h2>
<p>Kafka has a history of not being backward compatible, but its support of backward compatibility is getting better. During a Kafka upgrade, you should keep your producer and consumer clients on a version equal to or lower than the version you are upgrading from. After the upgrade is finished, you can start using a new protocol version and any new features it supports. There are three upgrade approaches available, discussed following.</p>
<h3>Rolling or in-place upgrade</h3>
<p>In a rolling or in-place upgrade scenario, upgrade one Kafka broker at a time. Take into consideration the recommendations for doing rolling restarts to avoid downtime for end users.</p>
<h3>Downtime upgrade</h3>
<p>If you can afford the downtime, you can take your entire cluster down, upgrade each Kafka broker, and then restart the cluster.</p>
<h3>Blue/green upgrade</h3>
<p>Intuit followed the blue/green deployment model for their workloads, as described following.</p>
<p>If you can afford to create a separate Kafka cluster and upgrade it, we highly recommend the blue/green upgrade scenario. In this scenario, we recommend that you keep your clusters up-to-date with the latest Kafka version. For additional details on Kafka version upgrades or more details, see the <a href="https://kafka.apache.org/documentation/#upgrade" target="_blank" rel="noopener noreferrer">Kafka upgrade documentation</a>.</p>
<p>The following illustration shows a blue/green upgrade.</p>
<p><img class="size-full wp-image-4527 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/02/Kafka3_png.png" alt="" width="607" height="445" /></p>
<p>In this scenario, the upgrade plan works like this:</p>
<ul>
<li>Create a new Kafka cluster on AWS.</li>
<li>Create a new Kafka producers stack to point to the new Kafka cluster.</li>
<li>Create topics on the new Kafka cluster.</li>
<li>Test the green deployment end to end (sanity check).</li>
<li>Using <a href="https://aws.amazon.com/route53/" target="_blank" rel="noopener noreferrer">Amazon Route 53</a>, change the new Kafka producers stack on AWS to point to the new green Kafka environment that you have created.</li>
</ul>
<p>The roll-back plan works like this:</p>
<ul>
<li>Switch Amazon Route 53 to the old Kafka producers stack on AWS to point to the old Kafka environment.</li>
</ul>
<p>For additional details on blue/green deployment architecture using Kafka, see the re:Invent presentation <a href="https://www.slideshare.net/AmazonWebServices/app307-leverage-the-cloud-with-a-bluegreen-deployment-architecture-aws-reinvent-2014" target="_blank" rel="noopener noreferrer">Leveraging the Cloud with a Blue-Green Deployment Architecture</a>.</p>
<h2>Performance tuning</h2>
<p>You can tune Kafka performance in multiple dimensions. Following are some best practices for performance tuning.</p>
<p>These are some general performance tuning techniques (a client-side producer sketch follows the list):</p>
<ul>
<li>If throughput is less than network capacity, try the following:
<ul>
<li>Add more threads</li>
<li>Increase batch size</li>
<li>Add more producer instances</li>
<li>Add more partitions</li>
</ul> </li>
<li>To improve latency when <tt>acks = -1</tt>, increase your <tt>num.replica.fetchers</tt> value.</li>
<li>For cross-AZ data transfer, tune your buffer settings for sockets and for OS TCP.</li>
<li>Make sure that <tt>num.io.threads</tt>&nbsp;is greater than the number of disks dedicated for Kafka.</li>
<li>Adjust <tt>num.network.threads</tt>&nbsp;based on the number of producers plus the number of consumers plus the replication factor.</li>
<li>Your message size affects your network bandwidth. To get higher performance from a Kafka cluster, select an <a href="https://aws.amazon.com/ec2/instance-types/" target="_blank" rel="noopener noreferrer">instance type</a> that offers 10 Gb/s performance.</li>
</ul>
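<p>To make these client-side settings concrete, here is a sketch of a throughput-oriented producer using the kafka-python client (an assumption; the equivalent settings exist in the Java client). The broker addresses are placeholders, and the values are illustrative starting points rather than universal recommendations.</p>
<div class="hide-language">
<pre><code class="lang-python">from kafka import KafkaProducer  # assumes: pip install kafka-python

# Larger batches with a short linger improve throughput at a small latency
# cost; compression trades CPU for network bandwidth per message.
producer = KafkaProducer(
    bootstrap_servers=['broker1:9092', 'broker2:9092'],  # placeholders
    acks='all',                      # equivalent to acks=-1: wait for all ISRs
    batch_size=64 * 1024,            # bytes per partition batch (default 16 KB)
    linger_ms=10,                    # wait up to 10 ms to fill a batch
    compression_type='gzip',
    buffer_memory=64 * 1024 * 1024,  # total client-side buffering
)
producer.send('my-topic', b'hello')
producer.flush()</code></pre>
</div>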
<p>For Java and JVM tuning, try the following:</p>
<ul>
<li>Minimize GC pauses by using the Oracle JDK, which uses the new G1 garbage-first collector.</li>
<li>Try to keep the Kafka heap size below 4 GB.</li>
</ul>
<h2>Monitoring</h2>
<p>Knowing whether a Kafka cluster is working correctly in a production environment is critical. Sometimes, just knowing that the cluster is up is enough, but Kafka applications have many moving parts to monitor. In fact, it can easily become confusing to understand what’s important to watch and what you can set aside. Items to monitor range from simple metrics about the overall rate of traffic, to producers, consumers, brokers, controller, ZooKeeper, topics, partitions, messages, and so on.</p>
<p>For monitoring, Intuit used several tools, including New Relic, Wavefront, <a href="https://aws.amazon.com/cloudwatch/" target="_blank" rel="noopener noreferrer">Amazon CloudWatch</a>, and <a href="https://aws.amazon.com/cloudtrail/" target="_blank" rel="noopener noreferrer">AWS CloudTrail</a>. Our recommended monitoring approach follows.</p>
<p>For system metrics, we recommend that you monitor:</p>
<ul>
<li>CPU load</li>
<li>Network metrics</li>
<li>File handle usage</li>
<li>Disk space</li>
<li>Disk I/O performance</li>
<li>Garbage collection</li>
<li>ZooKeeper</li>
</ul>
<p>For producers, we recommend that you monitor:</p>
<ul>
<li><tt>batch-size-avg</tt></li>
<li><tt>compression-rate-avg</tt></li>
<li><tt>waiting-threads</tt></li>
<li><tt>buffer-available-bytes</tt></li>
<li><tt>record-queue-time-max</tt></li>
<li><tt>record-send-rate</tt></li>
<li><tt>records-per-request-avg</tt></li>
</ul>
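<p>Most of these metrics are exposed directly by the client libraries. As an illustration, this minimal kafka-python sketch reads them from a running producer; the exact group and metric names returned depend on the client version, so treat the lookup keys as assumptions.</p>
<div class="hide-language">
<pre><code class="lang-python">from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="kafka.example.internal:9092")

# metrics() returns a nested dict of {metric-group: {metric-name: value}}.
metrics = producer.metrics()
producer_stats = metrics.get("producer-metrics", {})
for name in ("batch-size-avg", "compression-rate-avg",
             "record-queue-time-max", "record-send-rate",
             "records-per-request-avg"):
    print(name, producer_stats.get(name))
</code></pre>
</div>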
<p>For consumers, we recommend that you monitor:</p>
<ul>
<li><tt>records-lag-max</tt></li>
<li><tt>fetch-rate</tt></li>
<li><tt>fetch-latency-avg</tt></li>
<li><tt>fetch-size-avg</tt></li>
<li><tt>bytes-consumed-rate</tt></li>
<li><tt>records-consumed-rate</tt></li>
<li><tt>records-per-request-avg</tt></li>
</ul>
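<p>However you collect them, it helps to land client metrics next to your system metrics. This minimal boto3 sketch publishes one value as an Amazon CloudWatch custom metric so that it can be graphed and alarmed on; the namespace and dimension are illustrative.</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

cloudwatch = boto3.client("cloudwatch")

def publish(metric_name, value, client_id):
    """Push a single client metric to CloudWatch as a custom metric."""
    cloudwatch.put_metric_data(
        Namespace="Kafka/Clients",  # illustrative namespace
        MetricData=[{
            "MetricName": metric_name,
            "Dimensions": [{"Name": "ClientId", "Value": client_id}],
            "Value": value,
            "Unit": "None",
        }],
    )

publish("records-lag-max", 42.0, "consumer-group-1")
</code></pre>
</div>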
<h2>Security</h2>
<p>Like most distributed systems, Kafka provides mechanisms for transferring data with relatively high security across the components involved. Depending on your setup, security might involve different services, such as encryption, Kerberos, Transport Layer Security (TLS) certificates, and advanced access control list (ACL) configuration in brokers and ZooKeeper. This section outlines the Intuit approach. For details on Kafka security not covered in this section, see the <a href="http://kafka.apache.org/documentation.html#security" target="_blank" rel="noopener noreferrer">Kafka documentation</a>.</p>
<h3>Encryption at rest</h3>
<p>For EBS-backed EC2 instances, you can enable encryption at rest by using Amazon EBS volumes with encryption enabled. Amazon EBS uses <a href="https://aws.amazon.com/kms/" target="_blank" rel="noopener noreferrer">AWS Key Management Service (AWS KMS)</a> for encryption. For more details, see <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html" target="_blank" rel="noopener noreferrer">Amazon EBS Encryption</a> in the EBS documentation. For instance store–backed EC2 instances, you can enable encryption at rest by using <a href="https://aws.amazon.com/blogs/security/how-to-protect-data-at-rest-with-amazon-ec2-instance-store-encryption/" target="_blank" rel="noopener noreferrer">Amazon EC2 instance store encryption</a>.</p>
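<p>As an illustration, this minimal boto3 sketch creates a KMS-encrypted data volume and attaches it to a broker instance; the Availability Zone, key alias, and instance ID are hypothetical.</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

ec2 = boto3.client("ec2")

# Create a data volume that is encrypted at rest with a KMS key.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=1000,                    # GiB
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="alias/kafka-data",  # hypothetical KMS key alias
)

ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(VolumeId=volume["VolumeId"],
                  InstanceId="i-0123456789abcdef0",  # hypothetical broker
                  Device="/dev/xvdf")
</code></pre>
</div>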
<h3>Encryption in transit</h3>
<p>Kafka supports TLS for client and internode communications.</p>
<h3>Authentication</h3>
<p>Connections to brokers from clients (producers and consumers), from other brokers, and from tools can be authenticated using either Secure Sockets Layer (SSL) or Simple Authentication and Security Layer (SASL).</p>
<p>Kafka supports Kerberos authentication. If you already have a Kerberos server, you can add Kafka to your current configuration.</p>
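<p>As an illustration, this is a minimal kafka-python client configuration for SASL over TLS with Kerberos (GSSAPI). It assumes an existing Kerberos setup and brokers that already listen on a SASL_SSL port; the paths and hostname are hypothetical.</p>
<div class="hide-language">
<pre><code class="lang-python">from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka.example.internal:9093",  # hypothetical SASL_SSL port
    security_protocol="SASL_SSL",             # TLS transport + SASL auth
    sasl_mechanism="GSSAPI",                  # Kerberos
    sasl_kerberos_service_name="kafka",
    ssl_cafile="/etc/kafka/ssl/ca-cert.pem",  # CA that signed the broker certs
)
</code></pre>
</div>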
<h3>Authorization</h3>
<p>In Kafka, authorization is pluggable and integration with external authorization services is supported.</p>
<h2>Backup and restore</h2>
<p>The type of storage used in your deployment dictates your backup and restore strategy.</p>
<p>The best way to back up a Kafka cluster based on instance storage is to set up a second cluster and replicate messages using MirrorMaker. Kafka’s mirroring feature makes it possible to maintain a replica of an existing Kafka cluster. Depending on your setup and requirements, your backup cluster might be in the same AWS Region as your main cluster or in a different one.</p>
<p>For EBS-based deployments, you can enable <a href="http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/TakeScheduledSnapshot.html" target="_blank" rel="noopener noreferrer">automatic snapshots of EBS volumes</a> to back up volumes. You can easily create new EBS volumes from these snapshots to restore. We recommend storing backup files in <a href="https://aws.amazon.com/s3/" target="_blank" rel="noopener noreferrer">Amazon S3</a>.</p>
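<p>As an illustration, this minimal boto3 sketch snapshots every volume carrying a hypothetical <tt>Role=kafka-data</tt> tag; you might run it nightly from a scheduled job.</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

ec2 = boto3.client("ec2")

# Find all Kafka data volumes by tag (tag key/value are illustrative).
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:Role", "Values": ["kafka-data"]}]
)["Volumes"]

# Snapshot each volume; snapshots can later seed replacement volumes.
for vol in volumes:
    ec2.create_snapshot(VolumeId=vol["VolumeId"],
                        Description="nightly kafka backup")
</code></pre>
</div>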
<p>For more information on how to back up in Kafka, see the <a href="https://kafka.apache.org/documentation/" target="_blank" rel="noopener noreferrer">Kafka documentation</a>.</p>
<h2>Conclusion</h2>
<p>In this post, we discussed several patterns for running Kafka in the AWS Cloud. AWS also provides an alternative managed solution with <a href="https://aws.amazon.com/kinesis/data-streams/" target="_blank" rel="noopener noreferrer">Amazon Kinesis Data Streams</a>. With Kinesis Data Streams, there are <a href="https://aws.amazon.com/real-time-data-streaming-on-aws/" target="_blank" rel="noopener noreferrer">no servers to manage or scaling cliffs to worry about</a>: you can scale the size of your streaming pipeline in seconds without downtime, data replication across Availability Zones is automatic, and you benefit from security out of the box. Kinesis Data Streams is tightly integrated with a wide variety of AWS services, such as Lambda, Amazon Redshift, and Amazon Elasticsearch Service, and it supports open-source frameworks such as Storm, Spark, and Flink. To bridge the two systems, see the <a href="https://github.com/awslabs/kinesis-kafka-connector" target="_blank" rel="noopener noreferrer">kafka-kinesis-connector</a>.</p>
<p>If you have questions or suggestions, please comment below.</p>
<hr />
<p>&nbsp;</p>
<p><img class="alignnone size-full wp-image-4597" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/14/big_data_02.png" alt="" width="800" height="15" /></p>
<p>&nbsp;</p>
<hr />
<h3>Additional Reading</h3>
<p>If you found this post useful, be sure to check out <a href="https://aws.amazon.com/blogs/big-data/implement-serverless-log-analytics-using-amazon-kinesis-analytics/" target="_blank" rel="noopener noreferrer">Implement Serverless Log Analytics Using Amazon Kinesis Analytics</a> and <a href="https://aws.amazon.com/blogs/big-data/real-time-clickstream-anomaly-detection-with-amazon-kinesis-analytics/" target="_blank" rel="noopener noreferrer">Real-time Clickstream Anomaly Detection with Amazon Kinesis Analytics</a>.</p>
<p><img class="alignnone size-thumbnail wp-image-1595" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2017/02/01/implement_serverless_1-150x150.gif" alt="" width="150" height="150" /></p>
<hr />
<h3>About the Author</h3>
<p><img class="size-full wp-image-4504 alignleft" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/28/Prasad.png" alt="" width="113" height="146" /></p>
<p><strong>Prasad Alle is a Senior Big Data Consultant with AWS Professional Services.</strong> He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS Enterprise and Strategic customers. His interests extend to technologies such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>Best Practices for Running Apache Cassandra on Amazon EC2https://aws.amazon.com/blogs/big-data/best-practices-for-running-apache-cassandra-on-amazon-ec2/
Wed, 28 Feb 2018 20:01:09 +0000fa2f83a02ae4ee8e5a3cb092f231afd5e412115cIn this post, we outline three Cassandra deployment options, as well as provide guidance about determining the best practices for your use case.<p><a href="http://cassandra.apache.org/" target="_blank" rel="noopener noreferrer">Apache Cassandra</a> is a commonly used, high performance NoSQL database. AWS customers that currently maintain Cassandra on-premises may want to take advantage of the scalability, reliability, security, and economic benefits of running Cassandra on <a href="https://aws.amazon.com/ec2/" target="_blank" rel="noopener noreferrer">Amazon EC2</a>.</p>
<p>Amazon EC2 and <a href="https://aws.amazon.com/ebs/" target="_blank" rel="noopener noreferrer">Amazon Elastic Block Store (Amazon EBS)</a> provide secure, resizable compute capacity and storage in the AWS Cloud. When combined, you can deploy Cassandra, allowing you to scale capacity according to your requirements. Given the number of possible deployment topologies, it’s not always trivial to select the most appropriate strategy suitable for your use case.</p>
<p>In this post, we outline three Cassandra deployment options, as well as provide guidance about determining the best practices for your use case in the following areas:</p>
<ul>
<li>Cassandra resource overview</li>
<li>Deployment considerations</li>
<li>Storage options</li>
<li>Networking</li>
<li>High availability and resiliency</li>
<li>Maintenance</li>
<li>Security</li>
</ul>
<p><span id="more-4484"></span></p>
<h2>DynamoDB</h2>
<p>Before we jump into best practices for running Cassandra on AWS, we should mention that we have many customers who decided to use DynamoDB instead of managing their own Cassandra cluster. DynamoDB is fully managed, serverless, and provides multi-master cross-region replication, encryption at rest, and managed backup and restore. Integration with AWS Identity and Access Management (IAM) enables DynamoDB customers to implement fine-grained access control for their data security needs.</p>
<p>Several customers who have been using large Cassandra clusters for many years have moved to DynamoDB to eliminate the complications of administering Cassandra clusters and maintaining high availability and durability themselves. Gumgum.com is one customer who migrated to DynamoDB and observed significant savings.&nbsp;For more information, see <a href="http://techblog.gumgum.com/articles/moving-to-amazon-dynamodb-from-hosted-cassandra" target="_blank" rel="noopener noreferrer">Moving to Amazon DynamoDB from Hosted Cassandra: A Leap Towards 60% Cost Saving per Year</a>.</p>
<p>AWS provides options, so you’re covered whether you want to run your own NoSQL Cassandra database, or move to a fully managed, serverless DynamoDB database.</p>
<h2>Cassandra resource overview</h2>
<p>Here’s a short introduction to standard Cassandra resources and how they are implemented with AWS infrastructure. If you’re already familiar with Cassandra or AWS deployments, this can serve as a refresher.</p>
<table style="height: 1317px" border="1" width="1160" cellpadding="10">
<tbody>
<tr style="background-color: #000000">
<td style="text-align: center" width="203"><span style="color: #ffffff"><strong>Resource</strong></span></td>
<td style="text-align: center" width="203"><span style="color: #ffffff"><strong>Cassandra</strong></span></td>
<td style="text-align: center" width="203"><span style="color: #ffffff"><strong>AWS</strong></span></td>
</tr>
<tr>
<td width="203">Cluster</td>
<td width="203"> <p>A single Cassandra deployment.</p> <p>&nbsp;</p> <p>This typically consists of multiple physical locations, keyspaces, and physical servers.</p></td>
<td width="203">A logical deployment construct in AWS that maps to an <a href="https://aws.amazon.com/cloudformation" target="_blank" rel="noopener noreferrer">AWS CloudFormation</a> StackSet, which consists of one or many CloudFormation stacks to deploy Cassandra.</td>
</tr>
<tr>
<td width="203">Datacenter</td>
<td width="203">A group of nodes configured as a single replication group.</td>
<td width="203"> <p>A logical deployment construct in AWS.</p> <p>&nbsp;</p> <p>A datacenter is deployed with a single CloudFormation stack consisting of Amazon EC2 instances, networking, storage, and security resources.</p></td>
</tr>
<tr>
<td width="203">Rack</td>
<td width="203"> <p>A collection of servers.</p> <p>&nbsp;</p> <p>A datacenter consists of at least one rack. Cassandra tries to place the replicas on different racks.</p></td>
<td width="203">A single Availability Zone.</td>
</tr>
<tr>
<td width="203">Server/node</td>
<td width="203">A physical virtual machine running Cassandra software.</td>
<td width="203">An EC2 instance.</td>
</tr>
<tr>
<td width="203">Token</td>
<td width="203">Conceptually, the data managed by a cluster is represented as a ring. The ring is then divided into ranges equal to the number of nodes. Each node being responsible for one or more ranges of the data. Each node gets assigned with a token, which is essentially a random number from the range. The token value determines the node’s position in the ring and its range of data.</td>
<td width="203">Managed within Cassandra.</td>
</tr>
<tr>
<td width="203">Virtual node (vnode)</td>
<td width="203">Responsible for storing a range of data. Each vnode receives one token in the ring. A cluster (by default) consists of 256 tokens, which are uniformly distributed across all servers in the Cassandra datacenter.</td>
<td width="203">Managed within Cassandra.</td>
</tr>
<tr>
<td width="203">Replication factor</td>
<td width="203">The total number of replicas across the cluster.</td>
<td width="203">Managed within Cassandra.</td>
</tr>
</tbody>
</table>
<h2>Deployment considerations</h2>
<p>One of the many benefits of deploying Cassandra on Amazon EC2 is that you can automate many deployment tasks. In addition, AWS includes services, such as CloudFormation, that allow you to describe and provision all your infrastructure resources in your cloud environment.</p>
<p>We recommend orchestrating each Cassandra ring with one CloudFormation template. If you are deploying in multiple AWS Regions, you can use a CloudFormation StackSet to manage those stacks. All the maintenance actions (scaling, upgrading, and backing up) should be scripted with an AWS SDK. These may live as standalone <a href="https://aws.amazon.com/lambda" target="_blank" rel="noopener noreferrer">AWS Lambda</a> functions that can be invoked on demand during maintenance.</p>
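<p>As an illustration, this is a minimal sketch of such a Lambda-style function; it triggers a rolling change to one ring by updating that ring’s CloudFormation stack. The stack and parameter names are hypothetical.</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

cloudformation = boto3.client("cloudformation")

def handler(event, context):
    """Update one Cassandra ring's stack, e.g., to change instance type."""
    cloudformation.update_stack(
        StackName="cassandra-ring-us-east-1",   # hypothetical stack name
        UsePreviousTemplate=True,
        Parameters=[{
            "ParameterKey": "InstanceType",     # hypothetical parameter
            "ParameterValue": event.get("instance_type", "i3.2xlarge"),
        }],
        Capabilities=["CAPABILITY_IAM"],
    )
</code></pre>
</div>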
<h3>Deployment patterns</h3>
<p>In this section, we discuss various deployment options available for Cassandra in Amazon EC2. A successful deployment starts with thoughtful consideration of these options. Consider the amount of data, network environment, throughput, and availability.</p>
<ul>
<li>Single AWS Region, 3 Availability Zones</li>
<li>Active-active, multi-Region</li>
<li>Active-standby, multi-Region</li>
</ul>
<h3>Single region, 3 Availability Zones</h3>
<p>In this pattern, you deploy the Cassandra cluster in one AWS Region and three Availability Zones. There is only one ring in the cluster. By using EC2 instances in three zones, you ensure that the replicas are distributed uniformly in all zones.<img class="wp-image-4494 size-full aligncenter" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/27/Cassandra1-1.png" alt="" width="550" height="388" /></p>
<p>To ensure an even distribution of data across all Availability Zones, we recommend that you distribute the EC2 instances evenly across all three Availability Zones. The number of EC2 instances in the cluster should be a multiple of three (the replication factor).</p>
<p>This pattern is suitable in situations where the application is deployed in one Region or where deployments in different Regions should be constrained to the same Region because of data privacy or other legal requirements.</p>
<table style="height: 180px" border="1" width="1191" cellpadding="10">
<tbody>
<tr style="background-color: #000000">
<td style="text-align: center" width="312"><span style="color: #ffffff"><strong>Pros</strong></span></td>
<td style="text-align: center" width="312"><span style="color: #ffffff"><strong>Cons</strong></span></td>
</tr>
<tr>
<td width="312"> <p>●&nbsp;&nbsp;&nbsp;&nbsp; Highly available, can sustain failure of one Availability Zone.</p> <p>●&nbsp;&nbsp;&nbsp;&nbsp; Simple deployment</p></td>
<td width="312">●&nbsp;&nbsp;&nbsp;&nbsp; Does not protect in a situation when many of the resources in a Region are experiencing intermittent failure.</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<h3>Active-active, multi-Region</h3>
<p>In this pattern, you deploy two rings in two different Regions and link them. The VPCs in the two Regions are <a href="http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/Welcome.html" target="_blank" rel="noopener noreferrer">peered</a> so that data can be replicated between two rings.<img class="size-full wp-image-4488 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/27/Cassandra2.png" alt="" width="800" height="296" /></p>
<p>We recommend that the two rings in the two Regions be identical in nature, having the same number of nodes, instance types, and storage configuration.</p>
<p>This pattern is most suitable when the applications using the Cassandra cluster are deployed in more than one Region.</p>
<table style="height: 203px" border="1" width="1180" cellpadding="10">
<tbody>
<tr style="background-color: #000000">
<td style="text-align: center" width="312"><span style="color: #ffffff"><strong>Pros</strong></span></td>
<td style="text-align: center" width="312"><span style="color: #ffffff"><strong>Cons</strong></span></td>
</tr>
<tr>
<td width="312"> <p>●&nbsp;&nbsp;&nbsp;&nbsp; No data loss during failover.</p> <p>●&nbsp;&nbsp;&nbsp;&nbsp; Highly available, can sustain when many of the resources in a Region are experiencing intermittent failures.</p> <p>●&nbsp;&nbsp;&nbsp;&nbsp; Read/write traffic can be localized to the closest Region for the user for lower latency and higher performance.</p></td>
<td width="312"> <p>●&nbsp;&nbsp;&nbsp;&nbsp; High operational overhead</p> <p>●&nbsp;&nbsp;&nbsp;&nbsp; The second Region effectively doubles the cost</p></td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<h3>Active-standby, multi-Region</h3>
<p>In this pattern, you deploy two rings in two different Regions and link them. The VPCs in the two Regions are peered so that data can be replicated between two rings.</p>
<h2><img class="size-full wp-image-4489 aligncenter" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/27/Cassandra3.png" alt="" width="800" height="297" /></h2>
<p>However, the second Region does not receive traffic from the applications. It only functions as a secondary location for disaster recovery reasons. If the primary Region is not available, the second Region receives traffic.</p>
<p>We recommend that the two rings in the two Regions be identical in nature, having the same number of nodes, instance types, and storage configuration.</p>
<p>This pattern is most suitable when the applications using the Cassandra cluster require low recovery point objective (RPO) and recovery time objective (RTO).</p>
<table style="height: 164px" border="1" width="1103" cellpadding="10">
<tbody>
<tr style="background-color: #000000">
<td style="text-align: center" width="312"><span style="color: #ffffff"><strong>Pros</strong></span></td>
<td style="text-align: center" width="312"><span style="color: #ffffff"><strong>Cons</strong></span></td>
</tr>
<tr>
<td width="312"> <p>●&nbsp;&nbsp;&nbsp;&nbsp; No data loss during failover.</p> <p>●&nbsp;&nbsp;&nbsp;&nbsp; Highly available, can sustain failure or partitioning of one whole Region.</p></td>
<td width="312"> <p>●&nbsp;&nbsp;&nbsp;&nbsp; High operational overhead.</p> <p>●&nbsp;&nbsp;&nbsp;&nbsp; High latency for writes for eventual consistency.</p> <p>●&nbsp;&nbsp;&nbsp;&nbsp; The second Region effectively doubles the cost.</p></td>
</tr>
</tbody>
</table>
<h2>Storage options</h2>
<p>In on-premises deployments, Cassandra deployments use local disks to store data. There are two storage options for EC2 instances:</p>
<ul>
<li><a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html" target="_blank" rel="noopener noreferrer">Ephemeral storage (instance store)</a></li>
<li><a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html" target="_blank" rel="noopener noreferrer">Amazon EBS</a></li>
</ul>
<p>Your choice of storage is closely related to the type of workload supported by the Cassandra cluster. Instance store works best for most general purpose Cassandra deployments. However, in certain read-heavy clusters, Amazon EBS is a better choice.</p>
<p>The choice of <a href="https://aws.amazon.com/ec2/instance-types/" target="_blank" rel="noopener noreferrer">instance type</a> is generally driven by the type of storage:</p>
<ul>
<li>If ephemeral storage is required for your application, a storage-optimized (I3) instance is the best option.</li>
<li>If your workload requires Amazon EBS, it is best to go with compute-optimized (C5) instances.</li>
<li>Burstable instance types (T2) don’t offer good performance for Cassandra deployments.</li>
</ul>
<h3><strong>Instance store</strong></h3>
<p>Ephemeral storage is local to the EC2 instance. It can provide high input/output operations per second (IOPS), depending on the instance type. An SSD-based instance store can support up to 3.3M IOPS on I3 instances. This high performance makes it an ideal choice for transactional or write-intensive applications such as Cassandra.</p>
<p>In general, instance storage is recommended for transactional, large, and medium-size Cassandra clusters. For a large cluster, read/write traffic is distributed across a higher number of nodes, so the loss of one node has less of an impact. However, for smaller clusters, a quick recovery for the failed node is important.</p>
<p>As an example, for a cluster with 100 nodes, the loss of 1 node represents a 3.33% loss of capacity (with a replication factor of 3). For a cluster with 10 nodes, the loss of 1 node represents a 33% loss of capacity (with a replication factor of 3).</p>
<table style="height: 836px" border="1" width="1131" cellpadding="10">
<tbody>
<tr style="background-color: #000000">
<td style="text-align: center" width="156"><span style="color: #ffffff"><strong>&nbsp;</strong></span></td>
<td style="text-align: center" width="126"><span style="color: #ffffff"><strong>Ephemeral storage</strong></span></td>
<td style="text-align: center" width="132"><span style="color: #ffffff"><strong>Amazon EBS</strong></span></td>
<td style="text-align: center" width="183"><span style="color: #ffffff"><strong>Comments</strong></span></td>
</tr>
<tr>
<td width="156"> <p>IOPS</p> <p>(translates to higher query performance)</p></td>
<td width="126">Up to 3.3M on I3</td>
<td width="132"> <p>80K/instance</p> <p>10K/gp2/volume</p> <p>32K/io1/volume</p></td>
<td width="183"> <p>This results in a higher query performance on each host. However, Cassandra implicitly scales well in terms of horizontal scale. In general, we recommend scaling horizontally first. Then, scale vertically to mitigate specific issues.</p> <p>&nbsp;</p> <p>Note: 3.3M IOPS is observed with 100% random read with a 4-KB block size on Amazon Linux.</p></td>
</tr>
<tr>
<td width="156">AWS instance types</td>
<td width="126">I3</td>
<td width="132">Compute optimized, C5</td>
<td width="183">Being able to choose between different instance types is an advantage in terms of CPU, memory, etc., for horizontal and vertical scaling.</td>
</tr>
<tr>
<td width="156">Backup/ recovery</td>
<td width="126">Custom</td>
<td width="132">Basic building blocks are available from AWS.</td>
<td width="183"> <p>Amazon EBS offers distinct advantage here. It is small engineering effort to establish a backup/restore strategy.</p> <p>a) In case of an instance failure, the EBS volumes from the failing instance are attached to a new instance.</p> <p>b) In case of an EBS volume failure, the data is restored by creating a new EBS volume from last snapshot.</p></td>
</tr>
</tbody>
</table>
<h3><strong>Amazon EBS</strong></h3>
<p>EBS volumes offer higher resiliency, and IOPS can be configured based on your storage needs. EBS volumes also offer some distinct advantages in terms of recovery time. EBS volumes can support up to 32K IOPS per volume and up to 80K IOPS per instance in a RAID configuration. They have an annualized failure rate (AFR) of 0.1–0.2%, which makes EBS volumes 20 times more reliable than typical commodity disk drives.</p>
<p>The primary advantage of using Amazon EBS in a Cassandra deployment is that it reduces data-transfer traffic significantly when a node fails or must be replaced. The replacement node joins the cluster much faster. However, Amazon EBS could be more expensive, depending on your data storage needs.</p>
<p>Cassandra has built-in fault tolerance by replicating data to partitions across a configurable number of nodes. It can not only withstand node failures but if a node fails, it can also recover by copying data from other replicas into a new node. Depending on your application, this could mean copying tens of gigabytes of data. This adds additional delay to the recovery process, increases network traffic, and could possibly impact the performance of the Cassandra cluster during recovery.</p>
<p>Data stored on Amazon EBS is persisted in case of an instance failure or termination. The node’s data stored on an EBS volume remains intact and the EBS volume can be mounted to a new EC2 instance. Most of the replicated data for the replacement node is already available in the EBS volume and won’t need to be copied over the network from another node. Only the changes made after the original node failed need to be transferred across the network. That makes this process much faster.</p>
<p>EBS volumes can be snapshotted periodically. If a volume fails, a new volume can be created from the last known good snapshot and attached to a new instance. This is faster than creating a new volume and copying all the data to it.</p>
<p>Most Cassandra deployments use a replication factor of three. However, Amazon EBS does its own replication under the covers for fault tolerance. In practice, EBS volumes are about 20 times more reliable than typical disk drives. So, it is possible to go with a replication factor of two. This not only saves cost, but also enables deployments in a region that has two Availability Zones.</p>
<p>EBS volumes are recommended in case of read-heavy, small clusters (fewer nodes) that require storage of a large amount of data. Keep in mind that the Amazon EBS provisioned IOPS could get expensive. General purpose <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html" target="_blank" rel="noopener noreferrer">EBS volumes</a> work best when sized for required performance.</p>
<h2>Networking</h2>
<p>If your cluster is expected to receive high read/write traffic, select an <a href="https://aws.amazon.com/ec2/instance-types/" target="_blank" rel="noopener noreferrer">instance type</a> that offers 10 Gb/s performance. As an example, i3.8xlarge and c5.9xlarge both offer 10 Gb/s networking performance. A smaller instance type in the same family leads to relatively lower networking throughput.</p>
<p>Cassandra generates a universally unique identifier (UUID) for each node, based on the IP address of the instance. This UUID is used for distributing vnodes on the ring.</p>
<p>In an AWS deployment, an IP address is assigned automatically when an EC2 instance is created, so a replacement instance receives a new IP address. With the new IP address, the data distribution changes, and the whole ring must be rebalanced. This is not desirable.</p>
<p>To preserve the assigned IP address, use a <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#scenarios-enis" target="_blank" rel="noopener noreferrer">secondary elastic network interface</a> with a fixed IP address. Before swapping an EC2 instance with a new one, detach the secondary network interface from the old instance and attach it to the new one. This way, the UUID remains same and there is no change in the way that data is distributed in the cluster.</p>
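<p>As an illustration, this minimal boto3 sketch moves a node’s secondary network interface (and its fixed IP address) from a failed instance to its replacement, preserving the node’s UUID; the ENI and instance IDs are hypothetical.</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

ec2 = boto3.client("ec2")

eni_id = "eni-0123456789abcdef0"  # hypothetical secondary interface

# Detach the interface from the old instance, if it is still attached.
eni = ec2.describe_network_interfaces(
    NetworkInterfaceIds=[eni_id])["NetworkInterfaces"][0]
attachment = eni.get("Attachment")
if attachment:
    ec2.detach_network_interface(
        AttachmentId=attachment["AttachmentId"], Force=True)
    ec2.get_waiter("network_interface_available").wait(
        NetworkInterfaceIds=[eni_id])

# Attach it to the replacement instance as the secondary interface.
ec2.attach_network_interface(NetworkInterfaceId=eni_id,
                             InstanceId="i-0fedcba9876543210",
                             DeviceIndex=1)
</code></pre>
</div>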
<p>If you are deploying in more than one region, you can connect the two VPCs in two regions using <a href="https://aws.amazon.com/answers/networking/aws-multiple-region-multi-vpc-connectivity/" target="_blank" rel="noopener noreferrer">cross-region VPC peering</a>.</p>
<h2>High availability and resiliency</h2>
<p>Cassandra is designed to be fault-tolerant and highly available during multiple node failures. In the patterns described earlier in this post, you deploy Cassandra to three Availability Zones with a replication factor of three. Even though it limits the AWS Region choices to the Regions with three or more Availability Zones, it offers protection for the cases of one-zone failure and network partitioning within a single Region. The multi-Region deployments described earlier in this post protect when many of the resources in a Region are experiencing intermittent failure.</p>
<p>Resiliency is ensured through infrastructure automation. The deployment patterns all require a quick replacement of the failing nodes. In the case of a regionwide failure, when you deploy with the multi-Region option, traffic can be directed to the other active Region while the infrastructure is recovering in the failing Region. In the case of unforeseen data corruption, the standby cluster can be restored with point-in-time backups stored in <a href="https://aws.amazon.com/s3" target="_blank" rel="noopener noreferrer">Amazon S3</a>.</p>
<h2>Maintenance</h2>
<p>In this section, we look at ways to ensure that your Cassandra cluster is healthy:</p>
<ul>
<li>Scaling</li>
<li>Upgrades</li>
<li>Backup and restore</li>
</ul>
<h3>Scaling</h3>
<p>Cassandra is horizontally scaled by adding more instances to the ring. We recommend doubling the number of nodes in a cluster to scale up in one scale operation. This leaves the data homogeneously distributed across Availability Zones. Similarly, when scaling down, it’s best to halve the number of instances to keep the data homogeneously distributed.</p>
<p>Cassandra is vertically scaled by increasing the compute power of each node. Larger instance types have proportionally bigger memory. Use deployment automation to swap instances for bigger instances without downtime or data loss.</p>
<h3>Upgrades</h3>
<p>All three types of upgrades (Cassandra, operating system patching, and instance type changes) follow the same rolling upgrade pattern.</p>
<p>In this process, you start a new EC2 instance and install software and patches on it. Then you remove one node from the ring. For more information, see <a href="http://www.doc.ic.ac.uk/~pg1712/blog/cassandra-cluster-rolling-upgrade/" target="_blank" rel="noopener noreferrer">Cassandra cluster rolling upgrade</a>. Next, detach the secondary network interface from one of the EC2 instances in the ring and attach it to the new EC2 instance. Restart the Cassandra service and wait for it to sync. Repeat this process for all nodes in the cluster.</p>
<h3>Backup and restore</h3>
<p>Your backup and restore strategy is dependent on the type of storage used in the deployment. Cassandra supports snapshots and incremental backups. When using instance store, a file-based backup tool works best. Customers often use rsync or other third-party products to copy data backups from the instance to long-term storage. This process has to be repeated for all instances in the cluster for a complete backup. These backup files are copied back to new instances to restore. We recommend using S3 to durably store backup files for long-term storage.</p>
<p>For Amazon EBS based deployments, you can enable <a href="http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/TakeScheduledSnapshot.html" target="_blank" rel="noopener noreferrer">automated snapshots of EBS volumes</a> to back up volumes. New EBS volumes can be easily created from these snapshots for restoration.</p>
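<p>As an illustration, this minimal boto3 sketch shows the restore path: it finds the most recent snapshot carrying a hypothetical <tt>Role=cassandra-data</tt> tag, creates a volume from it, and attaches that volume to a replacement node.</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

ec2 = boto3.client("ec2")

# Pick the most recent snapshot of the node's data volume.
snapshots = ec2.describe_snapshots(
    OwnerIds=["self"],
    Filters=[{"Name": "tag:Role", "Values": ["cassandra-data"]}],
)["Snapshots"]
latest = max(snapshots, key=lambda s: s["StartTime"])

# Restore it as a new volume and attach it to the replacement node.
volume = ec2.create_volume(SnapshotId=latest["SnapshotId"],
                           AvailabilityZone="us-east-1a",
                           VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(VolumeId=volume["VolumeId"],
                  InstanceId="i-0123456789abcdef0",
                  Device="/dev/xvdf")
</code></pre>
</div>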
<h2>Security</h2>
<p>We recommend that you think about security in all aspects of deployment. The first step is to ensure that the data is encrypted at rest and in transit. The second step is to restrict access to unauthorized users. For more information about security, see the <a href="http://cassandra.apache.org/doc/latest/operating/security.html" target="_blank" rel="noopener noreferrer">Cassandra documentation</a>.</p>
<h3>Encryption at rest</h3>
<p>Encryption at rest can be achieved by using EBS volumes with encryption enabled. Amazon EBS uses <a href="https://aws.amazon.com/kms" target="_blank" rel="noopener noreferrer">AWS KMS</a> for encryption. For more information, see <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html" target="_blank" rel="noopener noreferrer">Amazon EBS Encryption</a>.</p>
<p>Instance store–based deployments require using an encrypted file system or an AWS partner solution.</p>
<h3>Encryption in transit</h3>
<p>Cassandra uses Transport Layer Security (TLS) for client and internode communications.</p>
<h3>Authentication</h3>
<p>Cassandra’s authentication mechanism is pluggable, which means that you can easily swap out one authentication method for another. You can also provide your own method of authenticating to Cassandra, such as a Kerberos ticket, or store passwords in a different location, such as an LDAP directory.</p>
<h3>Authorization</h3>
<p>The authorizer that’s plugged in by default is <tt>org.apache.cassandra.auth.AllowAllAuthorizer</tt>. Cassandra also provides a role-based access control (RBAC) capability, which allows you to create roles and assign permissions to these roles.</p>
<h2>Conclusion</h2>
<p>In this post, we discussed several patterns for running Cassandra in the AWS Cloud. This post describes how you can manage Cassandra databases running on Amazon EC2. AWS also provides managed offerings for a number of databases. To learn more, see <a href="https://aws.amazon.com/products/databases/" target="_blank" rel="noopener noreferrer">Purpose-built databases for all your application needs</a>.</p>
<p>If you have questions or suggestions, please comment below.</p>
<hr />
<p>&nbsp;</p>
<p><img class="alignnone size-full wp-image-4604" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/03/14/big_data_04.png" alt="" width="800" height="16" /></p>
<p>&nbsp;</p>
<hr />
<h3>Additional Reading</h3>
<p>If you found this post useful, be sure to check out <a href="https://aws.amazon.com/blogs/big-data/analyze-your-data-on-amazon-dynamodb-with-apache-spark/" target="_blank" rel="noopener noreferrer">Analyze Your Data on Amazon DynamoDB with Apache Spark</a> and <a href="https://aws.amazon.com/blogs/big-data/analysis-of-top-n-dynamodb-objects-using-amazon-athena-and-amazon-quicksight/" target="_blank" rel="noopener noreferrer">Analysis of Top-N DynamoDB Objects using Amazon Athena and Amazon QuickSight</a>.</p>
<p><img class="alignnone wp-image-2529 size-thumbnail" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2017/06/30/top-n_1-150x150.gif" alt="" width="150" height="150" /></p>
<hr />
<h3>About the Authors</h3>
<p><img class="size-full wp-image-4504 alignleft" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/28/Prasad.png" alt="" width="113" height="146" /></p>
<p><strong>Prasad Alle is a Senior Big Data Consultant with AWS Professional Services.</strong> He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS Enterprise and Strategic customers. His interests extend to technologies such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p><strong><img class="size-full wp-image-4503 alignleft" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/28/Provanshu.png" alt="" width="113" height="153" />Provanshu Dey is a Senior IoT Consultant with AWS Professional Services. </strong>He works on highly scalable and reliable IoT, data, and machine learning solutions with our customers. In his spare time, he enjoys spending time with his family and tinkering with electronics &amp; gadgets.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>Amazon Redshift – 2017 Recaphttps://aws.amazon.com/blogs/big-data/amazon-redshift-2017-recap/
Fri, 23 Feb 2018 17:24:07 +00004adfccaa976160a9a86758e1e897a59c2dbaaeaeWe have been busy adding new features and capabilities to Amazon Redshift, and we wanted to give you a glimpse of what we’ve been doing over the past year. In this article, we recap a few of our enhancements and provide a set of resources that you can use to learn more and get the most out of your Amazon Redshift implementation.<p><img class="size-full wp-image-4412 alignright" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/19/fast-at-scale.png" alt="" width="500" height="261" />We have been busy adding new features and capabilities to <a href="https://aws.amazon.com/redshift/">Amazon Redshift</a>, and we wanted to give you a glimpse of what we’ve been doing over the past year. In this article, we recap a few of our enhancements and provide a set of resources that you can use to learn more and get the most out of your Amazon Redshift implementation.</p>
<p>In 2017, we made more than 30 announcements about Amazon Redshift. We listened to you, our customers, and delivered Amazon Redshift Spectrum, a feature of Amazon Redshift that gives you the ability to extend analytics to your data lake—without moving data. We launched new DC2 nodes, doubling performance at the same price. We also announced many new features that provide greater scalability, better performance, more automation, and easier ways to manage your analytics workloads.</p>
<p>To see a full list of our launches, visit our <a href="https://aws.amazon.com/redshift/whats-new/" target="_blank" rel="noopener noreferrer">what’s new</a> page—and be sure to subscribe to our RSS feed.<span id="more-4384"></span></p>
<h2>Major launches in 2017</h2>
<p><strong>Amazon Redshift Spectrum</strong>—<strong>extend analytics to your data lake, without moving data</strong></p>
<p>We launched <a href="https://aws.amazon.com/about-aws/whats-new/2017/04/introducing-amazon-redshift-spectrum-run-amazon-redshift-queries-directly-on-datasets-as-large-as-an-exabyte-in-amazon-s3/" target="_blank" rel="noopener noreferrer">Amazon Redshift Spectrum</a> to give you the freedom to store data in <a href="https://aws.amazon.com/s3/">Amazon S3</a>, in open file formats, and have it available for analytics without the need to load it into your Amazon Redshift cluster. It enables you to easily join datasets across Redshift clusters and S3 to provide unique insights that you would not be able to obtain by querying independent data silos.</p>
<p>With Redshift Spectrum, you can run SQL queries against data in an Amazon S3 data lake as easily as you analyze data stored in Amazon Redshift. And you can do it without loading data or resizing the Amazon Redshift cluster based on growing data volumes. Redshift Spectrum separates compute and storage to meet workload demands for data size, concurrency, and performance. Redshift Spectrum scales processing across thousands of nodes, so results are fast, even with massive datasets and complex queries. You can query open file formats that you already use—such as Apache Avro, CSV, Grok, ORC, Apache Parquet, RCFile, RegexSerDe, SequenceFile, TextFile, and TSV—directly in Amazon S3, without any data movement.</p>
<blockquote>
<p>“<em>For complex queries, Redshift Spectrum provided a 67 percent performance gain</em>,” <a href="https://aws.amazon.com/blogs/big-data/using-amazon-redshift-spectrum-amazon-athena-and-aws-glue-with-node-js-in-production/" target="_blank" rel="noopener noreferrer">said Rafi Ton, CEO, NUVIAD</a>. “<em>Using the Parquet data format, Redshift Spectrum delivered an 80 percent performance improvement. For us, this was substantial.</em>”</p>
</blockquote>
<p>To learn more about Redshift Spectrum, watch our AWS Summit session <a href="https://www.youtube.com/watch?v=gchd2sDhSuY" target="_blank" rel="noopener noreferrer">Intro to Amazon Redshift Spectrum: Now Query Exabytes of Data in S3</a>, and read our announcement blog post <a href="https://aws.amazon.com/blogs/aws/amazon-redshift-spectrum-exabyte-scale-in-place-queries-of-s3-data/" target="_blank" rel="noopener noreferrer">Amazon Redshift Spectrum – Exabyte-Scale In-Place Queries of S3 Data</a>.</p>
<p><strong>DC2 nodes—twice the performance of DC1 at the same price</strong></p>
<p>We launched second-generation <a href="https://aws.amazon.com/about-aws/whats-new/2017/10/amazon-redshift-announces-dense-compute-dc2-nodes-with-twice-the-performance-as-dc1-at-the-same-price/" target="_blank" rel="noopener noreferrer">Dense Compute (DC2) nodes</a> to provide low latency and high throughput for demanding data warehousing workloads. DC2 nodes feature powerful Intel E5-2686 v4 (Broadwell) CPUs, fast DDR4 memory, and NVMe-based solid state disks (SSDs). We’ve tuned Amazon Redshift to take advantage of the better CPU, network, and disk on DC2 nodes, providing up to twice the performance of DC1 <a href="https://aws.amazon.com/redshift/pricing/" target="_blank" rel="noopener noreferrer">at the same price</a>. Our DC2.8xlarge instances now provide twice the memory per slice of data and an optimized storage layout with 30 percent better storage utilization.</p>
<blockquote>
<p>“<em>Redshift allows us to quickly spin up clusters and provide our data scientists with a fast and easy method to access data and generate insights</em>,” said Bradley Todd, technology architect at Liberty Mutual. “<em>We saw a 9x reduction in month-end reporting time with Redshift DC2 nodes as compared to DC1</em>.”</p>
</blockquote>
<p>Read our <a href="https://aws.amazon.com/redshift/customer-success/" target="_blank" rel="noopener noreferrer">customer testimonials</a> to see the performance gains our customers are experiencing with DC2 nodes. To learn more, read our blog post <a href="https://aws.amazon.com/blogs/big-data/amazon-redshift-dense-compute-dc2-nodes-deliver-twice-the-performance-as-dc1-at-the-same-price/" target="_blank" rel="noopener noreferrer">Amazon Redshift Dense Compute (DC2) Nodes Deliver Twice the Performance as DC1 at the Same Price</a>.</p>
<p><strong>Performance enhancements—3x to 5x faster queries</strong></p>
<p>On average, our customers are seeing 3x to 5x performance gains for most of their critical workloads.</p>
<p>We introduced <a href="https://aws.amazon.com/about-aws/whats-new/2017/11/amazon-redshift-uses-machine-learning-to-accelerate-dashboards-and-interactive-analysis/" target="_blank" rel="noopener noreferrer">short query acceleration</a> to speed up execution of queries such as reports, dashboards, and interactive analysis. Short query acceleration uses machine learning to predict the execution time of a query, and to move short running queries to an express <em>short query</em> queue for faster processing.</p>
<p>We launched <a href="https://aws.amazon.com/about-aws/whats-new/2017/11/amazon-redshift-introduces-result-caching-for-sub-second-response-for-repeat-queries/" target="_blank" rel="noopener noreferrer">results caching</a> to deliver sub-second response times for queries that are repeated, such as dashboards, visualizations, and those from BI tools. Results caching has an added benefit of freeing up resources to improve the performance of all other queries.</p>
<p>We also introduced <a href="https://aws.amazon.com/about-aws/whats-new/2017/12/amazon-redshift-introduces-late-materialization-for-faster-query-processing/" target="_blank" rel="noopener noreferrer">late materialization</a> to reduce the amount of data scanned for queries with predicate filters by batching and factoring in the filtering of predicates before fetching data blocks in the next column. For example, if only 10 percent of the table rows satisfy the predicate filters, Amazon Redshift can potentially save 90 percent of the I/O for the remaining columns to improve query performance.</p>
<p>We launched <a href="https://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-query-monitoring-rules.html" target="_blank" rel="noopener noreferrer">query monitoring rules</a> and pre-defined rule templates. These features make it easier for you to set metrics-based performance boundaries for workload management (WLM) queries, and specify what action to take when a query goes beyond those boundaries. For example, for a queue that’s dedicated to short-running queries, you might create a rule that aborts queries that run for more than 60 seconds. To track poorly designed queries, you might have another rule that logs queries that contain nested loops.</p>
<h2>Customer insights</h2>
<p>Amazon Redshift and Redshift Spectrum serve customers across a variety of industries and sizes, from startups to large enterprises. Visit our <a href="https://aws.amazon.com/redshift/customer-success/" target="_blank" rel="noopener noreferrer">customer</a> page to see the success that customers are having with our recent enhancements. Learn how companies like Liberty Mutual Insurance saw a 9x reduction in month-end reporting time using DC2 nodes. On this page, you can find case studies, videos, and other content that show how our customers are using Amazon Redshift to drive innovation and business results.</p>
<p>In addition, check out these resources to learn about the success our customers are having building out a data warehouse and data lake integration solution with Amazon Redshift:</p>
<ul>
<li>Sysco: <a href="https://www.youtube.com/watch?v=nlmL8c975B0" target="_blank" rel="noopener noreferrer">Developing an Insights Platform – Sysco’s Journey from Disparate Systems to a Data Lake and Beyond</a> (re:Invent session recording)</li>
<li>21<sup>st</sup> Century Fox: <a href="https://www.youtube.com/watch?v=3Xg3yu5xnMY" target="_blank" rel="noopener noreferrer">Migrating Your Traditional Data Warehouse to a Modern Data Lake</a> (re:Invent session recording)</li>
<li>Cerberus Technologies: <a href="https://aws.amazon.com/blogs/big-data/how-i-built-a-data-warehouse-using-amazon-redshift-and-aws-services-in-record-time/" target="_blank" rel="noopener noreferrer">How I built a data warehouse using Amazon Redshift and AWS services in record time</a> (blog post)</li>
<li>NUVIAD: <a href="https://aws.amazon.com/blogs/big-data/using-amazon-redshift-spectrum-amazon-athena-and-aws-glue-with-node-js-in-production/" target="_blank" rel="noopener noreferrer">Using Amazon Redshift Spectrum, Amazon Athena, and AWS Glue with Node.js in Production</a> (blog post)</li>
<li>Periscope Data: <a href="https://www.youtube.com/watch?v=AUgi8PvY5FY&amp;t=51s&amp;list=PLhr1KZpdzukdeX8mQ2qO73bg6UKQHYsHb&amp;index=1" target="_blank" rel="noopener noreferrer">Making Every Redshift Query Valuable with Periscope Data</a> (<em>This is My Architecture</em> episode)</li>
<li><a href="https://aws.amazon.com/solutions/case-studies/lyft/" target="_blank" rel="noopener noreferrer">Lyft Case Study</a></li>
<li><a href="https://aws.amazon.com/solutions/case-studies/boingo-wireless/" target="_blank" rel="noopener noreferrer">Boingo Wireless Case Study</a></li>
</ul>
<h2>Partner solutions</h2>
<p>You can enhance your Amazon Redshift data warehouse by working with industry-leading experts. Our AWS Partner Network (APN) Partners have certified their solutions to work with Amazon Redshift. They offer software, tools, integration, and consulting services to help you at every step. Visit our <a href="https://aws.amazon.com/redshift/partners/" target="_blank" rel="noopener noreferrer">Amazon Redshift Partner</a> page and choose an APN Partner. Or, use <a href="https://aws.amazon.com/marketplace/" target="_blank" rel="noopener noreferrer">AWS Marketplace</a> to find and immediately start using third-party software.</p>
<p>To see what our Partners are saying about Amazon Redshift Spectrum and our DC2 nodes mentioned earlier, read these blog posts:</p>
<ul>
<li>Looker: <a href="https://discourse.looker.com/t/using-amazon-redshift-s-new-spectrum-feature/4975" target="_blank" rel="noopener noreferrer">Using Amazon Redshift’s new Spectrum Feature</a></li>
<li>Matillion: <a href="https://www.matillion.com/events/data-lake-amazon-redshift-spectrum/" target="_blank" rel="noopener noreferrer">Accessing your Data Lake Assets from Amazon Redshift Spectrum</a></li>
<li>Periscope Data: <a href="https://www.periscopedata.com/blog/amazon-redshifts-hardware-upgrade-improves-query-speed-by-up-to-5x" target="_blank" rel="noopener noreferrer">Amazon Redshift’s Hardware Upgrade Improves Query Speed by up to 5x</a></li>
<li>Reflect: <a href="https://reflect.io/blog/redshift-spectrum/" target="_blank" rel="noopener noreferrer">The Implications of Redshift Spectrum</a></li>
<li>SnapLogic: <a href="https://www.snaplogic.com/blog/integrate-through-the-big-data-insights-gap" target="_blank" rel="noopener noreferrer">Integrate through the big data insights gap</a></li>
<li>Tableau: <a href="https://aws.amazon.com/blogs/big-data/tableau-10-4-supports-amazon-redshift-spectrum-with-external-amazon-s3-tables/" target="_blank" rel="noopener noreferrer">Tableau 10.4 Supports Amazon Redshift Spectrum with External Amazon S3 Tables</a></li>
</ul>
<h2>Resources</h2>
<p><strong>Blog posts</strong></p>
<p>Visit the <a href="https://aws.amazon.com/blogs/big-data/tag/amazon-redshift/" target="_blank" rel="noopener noreferrer">AWS Big Data Blog</a> for a list of all Amazon Redshift articles.</p>
<ul>
<li><a href="https://aws.amazon.com/blogs/big-data/amazon-redshift-spectrum-extends-data-warehousing-out-to-exabytes-no-loading-required/" target="_blank" rel="noopener noreferrer">Amazon Redshift Spectrum Extends Data Warehousing Out to Exabytes—No Loading Required</a></li>
<li><a href="https://aws.amazon.com/blogs/big-data/10-best-practices-for-amazon-redshift-spectrum/" target="_blank" rel="noopener noreferrer">10 Best Practices for Amazon Redshift Spectrum</a></li>
<li><a href="https://aws.amazon.com/blogs/big-data/top-8-best-practices-for-high-performance-etl-processing-using-amazon-redshift/" target="_blank" rel="noopener noreferrer">Top 8 Best Practices for High-Performance ETL Processing Using Amazon Redshift</a></li>
<li><a href="https://aws.amazon.com/blogs/big-data/analyze-database-audit-logs-for-security-and-compliance-using-amazon-redshift-spectrum/" target="_blank" rel="noopener noreferrer">Analyze Database Audit Logs for Security and Compliance Using Amazon Redshift Spectrum</a></li>
<li><a href="https://aws.amazon.com/blogs/big-data/from-data-lake-to-data-warehouse-enhancing-customer-360-with-amazon-redshift-spectrum/" target="_blank" rel="noopener noreferrer">From Data Lake to Data Warehouse: Enhancing Customer 360 with Amazon Redshift Spectrum</a></li>
</ul>
<p><strong>YouTube videos</strong></p>
<ul>
<li>re:Invent session recording: <a href="https://www.youtube.com/watch?v=Q_K3qH5OYaM" target="_blank" rel="noopener noreferrer">Best Practices for Data Warehousing with Amazon Redshift</a></li>
<li>AWS Online Tech Talk: <a href="https://www.youtube.com/watch?v=bwM4pj57mC0" target="_blank" rel="noopener noreferrer">Analyze your Data Lake, Fast @ Any Scale</a></li>
<li>AWS Online Tech Talk: <a href="https://www.youtube.com/watch?v=PzK8ha_lRfY" target="_blank" rel="noopener noreferrer">Amazon Redshift Spectrum: Quickly Query Exabytes of Data in S3</a></li>
</ul>
<p><strong>GitHub</strong></p>
<p>Our community of experts contribute on GitHub to provide tips and hints that can help you get the most out of your deployment. Visit <a href="https://github.com/search?q=org%3Aawslabs+redshift" target="_blank" rel="noopener noreferrer">GitHub</a> frequently to get the latest technical guidance, <a href="https://github.com/search?q=org%3Aawslabs+redshift&amp;type=Code" target="_blank" rel="noopener noreferrer">code samples</a>, <a href="https://github.com/awslabs/amazon-redshift-utils/tree/master/src/RedshiftAutomation" target="_blank" rel="noopener noreferrer">administrative task automation</a> utilities, the <a href="https://github.com/awslabs/amazon-redshift-utils/tree/master/src/AnalyzeVacuumUtility" target="_blank" rel="noopener noreferrer">analyze &amp; vacuum schema utility</a>, and more.</p>
<h2>Customer support</h2>
<p>If you are evaluating or considering a proof of concept with Amazon Redshift, or you need assistance migrating your on-premises or other cloud-based data warehouse to Amazon Redshift, our team of product experts and solutions architects can help you with architecting, sizing, and optimizing your data warehouse. Contact us using this <a href="https://pages.awscloud.com/redshift-proof-of-concept-request.html" target="_blank" rel="noopener noreferrer">support request form</a>, and let us know how we can assist you.</p>
<p>If you are an Amazon Redshift customer, we offer a no-cost health check program. Our team of database engineers and solutions architects give you recommendations for optimizing Amazon Redshift and Amazon Redshift Spectrum for your specific workloads. To learn more, email us at <a href="mailto:redshift-feedback@amazon.com" target="_blank" rel="noopener noreferrer">redshift-feedback@amazon.com</a>.</p>
<p>If you have any questions, email us at <a href="mailto:redshift-feedback@amazon.com" target="_blank" rel="noopener noreferrer">redshift-feedback@amazon.com</a>.</p>
<p>&nbsp;</p>
<hr />
<h3>Additional Reading</h3>
<p>If you found this post useful, be sure to check out <a href="https://aws.amazon.com/blogs/aws/amazon-redshift-spectrum-exabyte-scale-in-place-queries-of-s3-data/" target="_blank" rel="noopener noreferrer">Amazon Redshift Spectrum – Exabyte-Scale In-Place Queries of S3 Data</a>, <a href="https://aws.amazon.com/blogs/database/using-amazon-redshift-for-fast-analytical-reports/" target="_blank" rel="noopener noreferrer">Using Amazon Redshift for Fast Analytical Reports</a> and <a href="https://aws.amazon.com/blogs/database/how-to-migrate-your-oracle-data-warehouse-to-amazon-redshift-using-aws-sct-and-aws-dms/" target="_blank" rel="noopener noreferrer">How to Migrate Your Oracle Data Warehouse to Amazon Redshift Using AWS SCT and AWS DMS.</a></p>
<p><img class="alignnone wp-image-4356 size-thumbnail" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/vpc-150x150.png" alt="" width="150" height="150" /></p>
<hr />
<h3>About the Author</h3>
<p><img class="size-full wp-image-4354 alignleft" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/19/Larry.jpg" alt="" width="113" height="150" /></p>
<p><strong>Larry Heathcote is a Principal Product Marketing Manager at Amazon Web Services for data warehousing and analytics.</strong> Larry is passionate about seeing the results of data-driven insights on business outcomes. He enjoys family time, home projects, grilling out, and the taste of classic barbeque.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>How Realtor.com Monitors Amazon Athena Usage with AWS CloudTrail and Amazon QuickSighthttps://aws.amazon.com/blogs/big-data/analyzing-amazon-athena-usage-by-teams-within-a-real-estate-company/
Tue, 20 Feb 2018 18:29:55 +000018ed85c699317232b29b8ceee77de181f2289964In this post, I discuss how to build a solution for monitoring Athena usage. To build this solution, you rely on AWS CloudTrail. CloudTrail is a web service that records AWS API calls for your AWS account and delivers log files to an S3 bucket.<p><em>This is a customer post by Ajay Rathod, a Staff Data Engineer at Realtor.com. </em></p>
<p>Realtor.com, in their own words: <em>Realtor.com</em><em><sup>&reg;</sup>, operated by Move, Inc., is a trusted resource for home buyers, sellers, and dreamers. It offers the most comprehensive database of for-sale properties, among competing national sites, and the information, tools, and professional expertise to help people move confidently through every step of their home journey.</em></p>
<p>Move, Inc. processes hundreds of terabytes of data partitioned by day and hour. Various teams run hundreds of queries on this data. Using AWS services, Move, Inc. has built an infrastructure for gathering and analyzing data:</p>
<ul>
<li>The data is obtained from various sources.</li>
<li>The data is then loaded into an <a href="https://aws.amazon.com/s3">Amazon S3</a> data lake with <a href="https://aws.amazon.com/kinesis/">Amazon Kinesis</a> and <a href="https://aws.amazon.com/datapipeline/">AWS Data Pipeline</a>.</li>
<li>To increase the effectiveness of the storage and subsequent querying, the data is converted into a Parquet format, and stored again in S3.</li>
<li><a href="https://aws.amazon.com/athena/">Amazon Athena</a> is used as the SQL (Structured Query Language) engine to query the data in S3. Athena is easy to use and is often quickly adopted by various teams.</li>
<li>Teams visualize query results in <a href="https://quicksight.aws/">Amazon QuickSight</a>. Amazon QuickSight is a business analytics service that allows you to quickly and easily visualize data and collaborate with other users in your account.</li>
<li>Data access is controlled by <a href="https://aws.amazon.com/iam/">AWS Identity and Access Management</a> (IAM) roles.</li>
</ul>
<p><span id="more-4399"></span></p>
<p>This architecture is known as the data platform and is shared by the data science, data engineering, and the data operations teams within the organization. Move, Inc. also enables other cross-functional teams to use Athena. When many users use Athena, it helps to monitor its usage to ensure cost-effectiveness. This leads to a strong need for Athena metrics that can give details about the following:</p>
<ul>
<li>Users</li>
<li>Amount of data scanned (to monitor the cost of AWS service usage)</li>
<li>The databases used for queries</li>
<li>Actual queries that teams run</li>
</ul>
<p>Currently, the Move, Inc. team does not have an easy way of obtaining all these metrics from a single tool. Having one would greatly simplify monitoring efforts. For example, the data operations team wants to collect, every day, several metrics derived from the queries run on Athena against their data. They require the following metrics:</p>
<ul>
<li>Amount of data scanned by each user</li>
<li>Number of queries by each user</li>
<li>Databases accessed by each user</li>
</ul>
<p>In this post, I discuss how to build a solution for monitoring Athena usage. To build this solution, you rely on <a href="https://aws.amazon.com/cloudtrail">AWS CloudTrail</a>. CloudTrail is a web service that records AWS API calls for your AWS account and delivers log files to an S3 bucket.</p>
<h2>Solution</h2>
<p>Here is the high-level overview:</p>
<ol>
<li>Use the CloudTrail API to audit the user queries, and then use Athena to create a table from the CloudTrail logs.</li>
<li>Query the Athena API with the <a href="https://aws.amazon.com/cli">AWS CLI</a> to gather metrics about the data scanned by the user queries and put this information into another table in Athena.</li>
<li>Combine the information from these two sources by joining the two tables.</li>
<li>Use the resulting data to analyze, build insights, and create a dashboard that shows the usage of Athena by users within different teams in the organization.</li>
</ol>
<p>The architecture of this solution is shown in the following diagram.</p>
<p><img class="alignnone size-full wp-image-4403" style="margin: 20px 0px 20px 0px" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/16/Realtor1_.png" alt="" width="800" height="539" /></p>
<p>Take a look at this solution step by step.</p>
<h3>IAM and permissions setup</h3>
<p>This solution uses CloudTrail, Athena, and S3. Make sure that the users who run the following scripts and steps have the appropriate IAM roles and policies. For more information, see <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html">Tutorial: Delegate Access Across AWS Accounts Using IAM Roles</a>.</p>
<h3>Step 1: Create a table in Athena for data in CloudTrail</h3>
<p>CloudTrail records all the Athena API calls, including the queries run by different teams within the organization, and saves these logs in S3. The fields of most interest are:</p>
<ul>
<li>User identity</li>
<li>Start time of the API call</li>
<li>Source IP address</li>
<li>Request parameters</li>
<li>Response elements returned by the service</li>
</ul>
<p>When end users make queries in Athena, these queries are recorded by CloudTrail as responses from Athena web service calls. In these responses, each query is represented as a JSON (JavaScript Object Notation) string.</p>
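<p>For example, the responseElements string for a successful StartQueryExecution call contains the query execution ID. The following is a minimal illustration with a placeholder ID (real entries carry additional fields); this is the value that the queries below extract:</p>
<div class="hide-language">
<pre><code class="lang-json">{
    &quot;queryExecutionId&quot;: &quot;11111111-2222-3333-4444-555555555555&quot;
}</code></pre>
</div>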
<p>You can use the following CREATE TABLE statement to create the cloudtrail_logs table in Athena. For more information, see <a href="http://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html">Querying CloudTrail Logs</a> in the Athena documentation.</p>
<div class="hide-language">
<pre><code class="lang-sql">CREATE EXTERNAL TABLE cloudtrail_logs (
eventversion STRING,
userIdentity STRUCT&lt; type:STRING,
principalid:STRING,
arn:STRING,
accountid:STRING,
invokedby:STRING,
accesskeyid:STRING,
userName:String,
sessioncontext:STRUCT&lt; attributes:STRUCT&lt; mfaauthenticated:STRING,
creationdate:STRING&gt;,
sessionIssuer:STRUCT&lt; type:STRING,
principalId:STRING,
arn:STRING,
accountId:STRING,
userName:STRING&gt;&gt;&gt;,
eventTime STRING,
eventSource STRING,
eventName STRING,
awsRegion STRING,
sourceIpAddress STRING,
userAgent STRING,
errorCode STRING,
errorMessage STRING,
requestId STRING,
eventId STRING,
resources ARRAY&lt;STRUCT&lt; ARN:STRING,
accountId:STRING,
type:STRING&gt;&gt;,
eventType STRING,
apiVersion STRING,
readOnly BOOLEAN,
recipientAccountId STRING,
sharedEventID STRING,
vpcEndpointId STRING,
requestParameters STRING,
responseElements STRING,
additionalEventData STRING,
serviceEventDetails STRING
)
ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'
STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://&lt;s3 location of the CloudTrail logs&gt;'; </code></pre>
</div>
<h3>Step 2: Create a table in Amazon Athena for data from API output</h3>
<p>Athena provides an API that can be queried to obtain information about a specific query ID. It also provides an API to obtain information about a batch of query IDs, with a batch size of up to 50 query IDs.</p>
<p>You can use this API call to obtain information about the Athena queries that you are interested in and store this information in an S3 location. Create an Athena table to represent this data in S3. For the purpose of this post, the response fields that are of interest are as follows (a short sketch of the lookup follows this list):</p>
<ul>
<li>QueryExecutionId</li>
<li>Database</li>
<li>EngineExecutionTimeInMillis</li>
<li>DataScannedInBytes</li>
<li>Status</li>
<li>SubmissionDateTime</li>
<li>CompletionDateTime</li>
</ul>
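<p>Here is a minimal sketch of such a lookup using Boto3; the query ID is a placeholder, and the fields pulled out of the response mirror the list above:</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

athena_client = boto3.client('athena')

# Placeholder query ID; in practice the IDs come from the CloudTrail logs.
response = athena_client.get_query_execution(
    QueryExecutionId='11111111-2222-3333-4444-555555555555'
)

qe = response['QueryExecution']
record = {
    'QueryExecutionId': qe['QueryExecutionId'],
    'Database': qe.get('QueryExecutionContext', {}).get('Database'),
    'EngineExecutionTimeInMillis': qe.get('Statistics', {}).get('EngineExecutionTimeInMillis'),
    'DataScannedInBytes': qe.get('Statistics', {}).get('DataScannedInBytes'),
    'Status': qe['Status']['State'],
    'SubmissionDateTime': qe['Status'].get('SubmissionDateTime'),
    'CompletionDateTime': qe['Status'].get('CompletionDateTime'),
}</code></pre>
</div>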
<p>The CREATE TABLE statement for athena_api_output is as follows:</p>
<div class="hide-language">
<pre><code class="lang-sql">CREATE EXTERNAL TABLE IF NOT EXISTS athena_api_output(
queryid string,
querydatabase string,
executiontime bigint,
datascanned bigint,
status string,
submissiondatetime string,
completiondatetime string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = ',',
'field.delim' = ','
) LOCATION 's3://&lt;s3 location of the output from the API calls&gt;'
TBLPROPERTIES ('has_encrypted_data'='false')</code></pre>
</div>
<p>You can inspect the query IDs and user information for the last day. The query is as follows:</p>
<div class="hide-language">
<pre><code class="lang-sql">with data AS (
SELECT
json_extract(responseelements,
'$.queryExecutionId') AS query_id,
(useridentity.arn) AS uid,
(useridentity.sessioncontext.sessionIssuer.userName) AS role,
from_iso8601_timestamp(eventtime) AS dt
FROM cloudtrail_logs
WHERE eventsource='athena.amazonaws.com'
AND eventname='StartQueryExecution'
AND json_extract(responseelements, '$.queryExecutionId') is NOT null)
SELECT *
FROM data
WHERE dt &gt; date_add('day',-1,now() )</code></pre>
</div>
<h3>Step 3: Obtain Query Statistics from Athena API</h3>
<p>You can write a simple Python script to loop through queries in batches of 50 and query the Athena API for query statistics. You can use the <a href="https://github.com/boto/boto3">Boto3</a> library for these lookups. Boto3 is the AWS SDK for Python, and it provides an easy way to interact with and automate your AWS development. The response from the Boto3 API can be parsed to extract the fields that you need as described in Step 2.</p>
<p>An example Python script is available in the <a href="https://github.com/MoveInc/AthenaMetrics">AthenaMetrics</a> GitHub repo.</p>
<p>Format these fields, for each query ID, as CSV strings and store them for the entire batch response in an S3 bucket. This S3 bucket is represented by the table created in Step 2, athena_api_output.</p>
<p>In your Python code, create a variable named sql_query, and assign it a string representing the SQL query defined in Step 2. The s3_query_folder is the location in S3 that is used by Athena for storing results of the query. The code is as follows:</p>
<div class="hide-language">
<pre><code class="lang-python">sql_query =
“””
with data AS (
SELECT
json_extract(responseelements,
'$.queryExecutionId') AS query_id,
(useridentity.arn) AS uid,
(useridentity.sessioncontext.sessionIssuer.userName) AS role,
from_iso8601_timestamp(eventtime) AS dt
FROM cloudtrail_logs
WHERE eventsource='athena.amazonaws.com'
AND eventname='StartQueryExecution'
AND json_extract(responseelements, '$.queryExecutionId') is NOT null)
SELECT *
FROM data
WHERE dt &gt; date_add('day',-1,now() )
“””
athena_client = boto3.client('athena')
query_execution = self.client.start_query_execution(
QueryString=sql_query,
ClientRequestToken=str(uuid.uuid4()),
ResultConfiguration={
'OutputLocation': s3_staging_folder,
}
)
query_execution_id = query_execution['QueryExecutionId']
### Allow query to complete, check for status response[&quot;QueryExecution&quot;][&quot;Status&quot;][&quot;State&quot;]
response = athena_client.get_query_execution(QueryExecutionId=query_execution_id)
if response[“QueryExecution”][“Status”][“State”] == “SUCCEEDED”:
results = athena_client.get_query_results(QueryEecutionId=query_exection_id)</code></pre>
</div>
<p>You can iterate through the results in the response object and consolidate them into batches of 50 results. For each batch, you can invoke the Athena API batch-get-query-execution.</p>
<p>Store the output in the S3 location pointed to by the CREATE TABLE definition for the table athena_api_output in Step 2. The SQL statement above returns only queries run in the last 24 hours; you may want to widen that window to capture usage over a longer period of time. The code snippet for this API call is as follows:</p>
<div class="hide-language">
<pre><code class="lang-python">response = athena_client.batch_get_query_execution(
QueryExecutionIds=batchqueryids
)</code></pre>
</div>
<p>The batchqueryids value is an array of up to 50 query IDs extracted from the result set of the SELECT query. This script creates the data needed by your second table, athena_api_output. A consolidated sketch of the batching and upload follows, after which you are ready to join both tables in Athena.</p>
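<p>Putting the pieces together, the following is a minimal sketch of the batching and upload logic. The bucket and key names are placeholders, and query_ids is assumed to hold the IDs gathered from the Step 2 SELECT query:</p>
<div class="hide-language">
<pre><code class="lang-python">import csv
import io

import boto3

athena_client = boto3.client('athena')
s3 = boto3.resource('s3')

# Assumed to be populated from the results of the Step 2 SELECT query.
query_ids = []

buf = io.StringIO()
writer = csv.writer(buf)

# batch_get_query_execution accepts at most 50 query IDs per call.
for i in range(0, len(query_ids), 50):
    batch = athena_client.batch_get_query_execution(
        QueryExecutionIds=query_ids[i:i + 50]
    )
    for qe in batch['QueryExecutions']:
        writer.writerow([
            qe['QueryExecutionId'],
            qe.get('QueryExecutionContext', {}).get('Database', ''),
            qe.get('Statistics', {}).get('EngineExecutionTimeInMillis', ''),
            qe.get('Statistics', {}).get('DataScannedInBytes', ''),
            qe['Status']['State'],
            qe['Status'].get('SubmissionDateTime', ''),
            qe['Status'].get('CompletionDateTime', ''),
        ])

# Placeholder bucket and key; they must match the LOCATION of athena_api_output.
s3.Object('my-athena-metrics-bucket', 'athena-api-output/metrics.csv').put(
    Body=buf.getvalue()
)</code></pre>
</div>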
<h3>Step 4: Join the CloudTrail and Athena API data</h3>
<p>Now that the two tables are available with the data that you need, you can run the following Athena query to look at the usage by user. You can limit the output of this query to the most recent five days.</p>
<div class="hide-language">
<pre><code class="lang-sql">SELECT
c.useridentity.arn,
json_extract(c.responseelements, '$.queryExecutionId') qid,
a.datascanned,
a.querydatabase,
a.executiontime,
a.submissiondatetime,
a.completiondatetime,
a.status
FROM cloudtrail_logs c
JOIN athena_api_output a
ON cast(json_extract(c.responseelements, '$.queryExecutionId') as varchar) = a.queryid
WHERE eventsource = 'athena.amazonaws.com'
AND eventname = 'StartQueryExecution'
AND from_iso8601_timestamp(eventtime) &gt; date_add('day',-5 ,now() )</code></pre>
</div>
<h3>Step 5: Analyze and visualize the results</h3>
<p>In this step, using QuickSight, you can create a dashboard that shows the following metrics:</p>
<ul>
<li>Average amount of data scanned (MB) by a user and database</li>
<li>Number of queries per user</li>
<li>Count of queries per database</li>
</ul>
<p>For more information, see <a href="https://docs.aws.amazon.com/quicksight/latest/user/working-with-dashboards.html">Working with Dashboards</a>.</p>
<p><img class="alignnone size-full wp-image-4404" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/16/Realtor2.png" alt="" width="800" height="436" /></p>
<h2>Conclusion</h2>
<p>Using the solution described in this post, you can continuously monitor the usage of Athena by various teams. Taking this a step further, you can automate and set user limits for how much data the Athena users in your team can query within a given period of time. You may also choose to add notifications when the usage by a particular user crosses a specified threshold. This helps you manage costs incurred by different teams in your organization.</p>
<p><em>Realtor.com would like to acknowledge the tremendous support and guidance provided by Hemant Borole, Senior Consultant, Big Data &amp; Analytics with AWS Professional Services in helping to author this post.</em></p>
<hr />
<h3>Additional Reading</h3>
<p>If you found this post useful, be sure to check out <a href="https://aws.amazon.com/blogs/big-data/build-a-schema-on-read-analytics-pipeline-using-amazon-athena/" target="_blank" rel="noopener noreferrer">Build a Schema-on-Read Analytics Pipeline Using Amazon Athena</a> and <a href="https://aws.amazon.com/blogs/big-data/query-and-visualize-aws-cost-and-usage-data-using-amazon-athena-and-amazon-quicksight/" target="_blank" rel="noopener noreferrer">Query and Visualize AWS Cost and Usage Data Using Amazon Athena and Amazon QuickSight</a>.</p>
<p><img class="alignnone size-thumbnail wp-image-4356" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/vpc-150x150.png" alt="" width="150" height="150" /></p>
<hr />
<h3>About the Author</h3>
<p><img class="size-full wp-image-4354 alignleft" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/19/Ajay.png" alt="" width="113" height="150" /></p>
<p><strong>Ajay Rathod is a Staff Data Engineer at Realtor.com.</strong> With a deep background in the AWS Cloud Platform and data infrastructure, Ajay leads the data engineering and automation aspects of data operations at Realtor.com. He has designed and deployed many ETL pipelines and workflows for the Realtor Data Analytics Platform using AWS services like Data Pipeline, Athena, Batch, Glue and Boto3. He has created various operational metrics to monitor ETL pipelines and resource usage.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>How I built a data warehouse using Amazon Redshift and AWS services in record timehttps://aws.amazon.com/blogs/big-data/how-i-built-a-data-warehouse-using-amazon-redshift-and-aws-services-in-record-time/
Mon, 12 Feb 2018 15:22:58 +000027cd94a6210bac6805d4efd812b417d69b892e8eOver the years, I have developed and created a number of data warehouses from scratch. Recently, I built a data warehouse for the iGaming industry single-handedly. To do it, I used the power and flexibility of Amazon Redshift and the wider AWS data management ecosystem. In this post, I explain how I was able to build a robust and scalable data warehouse without the large team of experts typically needed.<p><em><img class="alignright wp-image-4368 size-medium" style="margin: 20px 20px 20px 20px" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/09/Cerberus4-300x187.png" alt="" width="300" height="187" />This is a customer post by Stephen Borg, the Head of Big Data and BI at <a href="http://cerberus.io/" target="_blank" rel="noopener noreferrer">Cerberus Technologies</a>.</em></p>
<p>Cerberus Technologies, in their own words: <em>Cerberus is a company founded in 2017 by a team of visionary iGaming veterans. Our mission is simple – to offer the best tech solutions through a data-driven and a customer-first approach, delivering innovative solutions that go against traditional forms of working and process. This mission is based on the solid foundations of reliability, flexibility and security, and we intend to fundamentally change the way iGaming and other industries interact with technology.</em></p>
<p>Over the years, I have developed and created a number of data warehouses from scratch. Recently, I built a data warehouse for the iGaming industry single-handedly. To do it, I used the power and flexibility of <a href="https://aws.amazon.com/redshift/" target="_blank" rel="noopener noreferrer">Amazon Redshift</a> and the wider AWS data management ecosystem. In this post, I explain how I was able to build a robust and scalable data warehouse without the large team of experts typically needed.<span id="more-4363"></span></p>
<p>In two of my recent projects, I ran into challenges when scaling our data warehouse using on-premises infrastructure. Data was growing at many tens of gigabytes per day, and query performance was suffering. Scaling required major capital investment for hardware and software licenses, and also significant operational costs for maintenance and technical staff to keep it running and performing well. Unfortunately, I couldn’t get the resources needed to scale the infrastructure with data growth, and these projects were abandoned. Thanks to cloud data warehousing, the bottlenecks of infrastructure resources, capital expense, and operational costs have been significantly reduced or eliminated. There is no more excuse for allowing obstacles of the past to delay delivering timely insights to decision makers, no matter how much data you have.</p>
<p>With <a href="https://aws.amazon.com/redshift/" target="_blank" rel="noopener noreferrer">Amazon Redshift</a> and AWS, I delivered a cloud data warehouse to the business very quickly, and with a small team: me. I didn’t have to order hardware or software, and I no longer needed to install, configure, tune, or keep up with patches and version updates. Instead, I easily set up a robust data processing pipeline and we were quickly ingesting and analyzing data. Now, my data warehouse team can be extremely lean, and focus more time on bringing in new data and delivering insights. In this post, I show you the AWS services and the architecture that I used.</p>
<h2>Handling data feeds</h2>
<p>I have several different data sources that provide everything needed to run the business. The data includes activity from our iGaming platform, social media posts, clickstream data, marketing and campaign performance, and customer support engagements.</p>
<p>To handle the diversity of data feeds, I developed abstract integration applications using Docker that run on <a href="https://aws.amazon.com/ecs/" target="_blank" rel="noopener noreferrer">Amazon EC2 Container Service</a> (Amazon ECS) and feed data to <a href="https://aws.amazon.com/kinesis/" target="_blank" rel="noopener noreferrer">Amazon Kinesis</a> Data Streams. These data streams can be used for real-time analytics. In my system, each record in Kinesis is preprocessed by an <a href="https://aws.amazon.com/lambda/" target="_blank" rel="noopener noreferrer">AWS Lambda</a> function to cleanse and aggregate information. <a href="https://aws.amazon.com/kinesis/data-firehose/" target="_blank" rel="noopener noreferrer">Amazon Kinesis Data Firehose</a> then stores the data where I need it on Amazon S3. Suppose that you used an on-premises architecture to accomplish the same task. A team of data engineers would be required to maintain and monitor a Kafka cluster, develop applications to stream data, and maintain a Hadoop cluster and the infrastructure underneath it for data storage. With my stream processing architecture, there are no servers to manage, no disk drives to replace, and no service monitoring to write.</p>
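<p>Each integration application has its own cleansing rules, but the Lambda functions all follow a similar shape. The following is a minimal sketch of such a handler; the field names and cleansing rules are illustrative assumptions, not my production code:</p>
<div class="hide-language">
<pre><code class="lang-python">import base64
import json

def lambda_handler(event, context):
    # Cleanse records arriving from a Kinesis data stream trigger.
    cleaned = []
    for record in event['Records']:
        payload = json.loads(base64.b64decode(record['kinesis']['data']))
        # Illustrative rules: drop records without a player ID, normalize casing.
        if not payload.get('player_id'):
            continue
        payload['event_type'] = payload.get('event_type', 'unknown').lower()
        cleaned.append(payload)
    # In the real pipeline, cleansed records continue on through Kinesis Data Firehose.
    return {'cleaned_records': len(cleaned)}</code></pre>
</div>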
<p><img class="alignnone size-full wp-image-4369" style="margin: 20px 0px 20px 0px" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/09/Cerberus1.png" alt="" width="800" height="252" /></p>
<p>Setting up a Kinesis stream can be done with a few clicks, and the same is true for Kinesis Data Firehose. Firehose can be configured to automatically consume data from a Kinesis data stream, and then write compressed data every N minutes to <a href="https://aws.amazon.com/s3/" target="_blank" rel="noopener noreferrer">Amazon S3</a>. When I want to process a Kinesis data stream, it’s very easy to set up a Lambda function to be executed on each message received. I can just set a trigger from the AWS Lambda Management Console, as shown following.</p>
<p><img class="alignnone size-full wp-image-4370" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/09/Cerberus2.png" alt="" width="800" height="231" /></p>
<p>I also monitor the duration of function execution using <a href="https://aws.amazon.com/cloudwatch/" target="_blank" rel="noopener noreferrer">Amazon CloudWatch</a> and <a href="https://aws.amazon.com/xray/" target="_blank" rel="noopener noreferrer">AWS X-Ray</a>.</p>
<p>Regardless of the format in which I receive data from our partners, I can send it to Kinesis as JSON data using my own formatters. After Firehose writes this to Amazon S3, I have everything in nearly the same structure I received but compressed, encrypted, and optimized for reading.</p>
<p>This data is automatically crawled by <a href="https://aws.amazon.com/glue/" target="_blank" rel="noopener noreferrer">AWS Glue</a> and placed into the AWS Glue Data Catalog. This means that I can immediately query the data directly on S3 using <a href="https://aws.amazon.com/athena/" target="_blank" rel="noopener noreferrer">Amazon Athena</a> or through <a href="https://aws.amazon.com/redshift/spectrum/" target="_blank" rel="noopener noreferrer">Amazon Redshift Spectrum</a>. Previously, I used Amazon EMR and an Amazon RDS–based metastore in Apache Hive for catalog management. Now I can avoid the complexity of maintaining Hive Metastore catalogs. Glue takes care of high availability and the operations side so that I know that end users can always be productive.</p>
<h2>Working with Amazon Athena and Amazon Redshift for analysis</h2>
<p>I found Amazon Athena extremely useful out of the box for ad hoc analysis. Our engineers (me) use Athena to understand new datasets that we receive and to understand what transformations will be needed for long-term query efficiency.</p>
<p>For our data analysts and data scientists, we’ve selected <a href="https://aws.amazon.com/redshift/" target="_blank" rel="noopener noreferrer">Amazon Redshift</a>. Amazon Redshift has proven to be the right tool for us over and over again. It easily processes 20+ million transactions per day, regardless of the footprint of the tables and the type of analytics required by the business. Latency is low and query performance expectations have been more than met. We use Redshift Spectrum for long-term data retention, which enables me to extend the analytic power of Amazon Redshift beyond local data to anything stored in S3, and without requiring me to load any data. Redshift Spectrum gives me the freedom to store data where I want, in the format I want, and have it available for processing when I need it.</p>
<p>To load data directly into Amazon Redshift, I use <a href="https://aws.amazon.com/datapipeline/" target="_blank" rel="noopener noreferrer">AWS Data Pipeline</a> to orchestrate data workflows. I create <a href="https://aws.amazon.com/emr/" target="_blank" rel="noopener noreferrer">Amazon EMR</a> clusters on an intra-day basis, which I can easily adjust to run more or less frequently as needed throughout the day. EMR clusters are used together with <a href="https://aws.amazon.com/rds/" target="_blank" rel="noopener noreferrer">Amazon RDS</a>, Apache Spark 2.0, and S3 storage. The data pipeline application loads ETL configurations from Spring RESTful services hosted on <a href="https://aws.amazon.com/elasticbeanstalk/" target="_blank" rel="noopener noreferrer">AWS Elastic Beanstalk</a>. The application then loads data from S3 into memory, aggregates and cleans the data, and then writes the final version of the data to Amazon Redshift. This data is then ready to use for analysis. Spark on EMR also helps with recommendations and personalization use cases for various business users, and I find this easy to set up and deliver what users want. Finally, business users use <a href="https://quicksight.aws/" target="_blank" rel="noopener noreferrer">Amazon QuickSight</a> for self-service BI to slice, dice, and visualize the data depending on their requirements.</p>
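<p>To give a flavor of the EMR stage, here is a minimal PySpark sketch along those lines. The S3 path, column names, table name, and JDBC settings are all placeholder assumptions (the real ETL logic is driven by the configurations loaded from the Spring services), and a Redshift JDBC driver must be available on the cluster:</p>
<div class="hide-language">
<pre><code class="lang-python">from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName('intraday-load').getOrCreate()

# Placeholder path to the compressed JSON that Kinesis Data Firehose wrote to S3.
events = spark.read.json('s3://my-datalake-bucket/raw/2018/02/09/')

# Aggregate raw events into per-player daily activity.
daily = (events
    .where(F.col('player_id').isNotNull())
    .groupBy('player_id', F.to_date('event_time').alias('event_date'))
    .agg(F.count('*').alias('events'), F.sum('stake').alias('total_stake')))

# Placeholder JDBC URL, table, and credentials for the Amazon Redshift cluster.
(daily.write.format('jdbc')
    .option('url', 'jdbc:redshift://my-cluster.example.us-east-1.redshift.amazonaws.com:5439/warehouse')
    .option('dbtable', 'public.player_daily_activity')
    .option('user', 'etl_user')
    .option('password', '****')
    .mode('append')
    .save())</code></pre>
</div>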
<p><img class="alignnone size-full wp-image-4371" style="margin: 20px 0px 20px 0px" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/09/Cerberus3.png" alt="" width="610" height="462" /></p>
<p>Each AWS service in this architecture plays its part in saving precious time that’s crucial for delivery and getting different departments in the business on board. I found the services easy to set up and use, and all have proven to be highly reliable in our production environments. When the architecture was in place, scaling out was either completely handled by the service or a matter of a simple API call, and crucially didn’t require me to change one line of code. Increasing shards for Kinesis can be done in a minute by editing a stream. Increasing capacity for Lambda functions can be accomplished by editing the megabytes allocated for processing, and concurrency is handled automatically. EMR cluster capacity can easily be increased by changing the master and slave node types in Data Pipeline, or by using Auto Scaling. Lastly, RDS and Amazon Redshift can be easily upgraded without any major tasks to be performed by our team (again, me).</p>
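<p>For example, resharding a Kinesis stream is a single API call. A minimal Boto3 sketch (the stream name and target count are placeholders) looks like this:</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

kinesis = boto3.client('kinesis')

# Placeholder stream name and target; doubling the shard count doubles ingest capacity.
kinesis.update_shard_count(
    StreamName='igaming-activity-stream',
    TargetShardCount=4,
    ScalingType='UNIFORM_SCALING',
)</code></pre>
</div>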
<p>In the end, using AWS services including Kinesis, Lambda, Data Pipeline, and Amazon Redshift allows me to keep my team lean and highly productive. I eliminated the cost and delays of capital infrastructure, as well as the late night and weekend calls for support. I can now give maximum value to the business while keeping operational costs down. My team pushed out an agile and highly responsive data warehouse solution in record time and we can handle changing business requirements rapidly, and quickly adapt to new data and new user requests.</p>
<hr />
<h3>Additional Reading</h3>
<p>If you found this post useful, be sure to check out <a href="https://aws.amazon.com/blogs/big-data/deploy-a-data-warehouse-quickly-with-amazon-redshift-amazon-rds-for-postgresql-and-tableau-server/" target="_blank" rel="noopener noreferrer">Deploy a Data Warehouse Quickly with Amazon Redshift, Amazon RDS for PostgreSQL and Tableau Server</a> and <a href="https://aws.amazon.com/blogs/big-data/top-8-best-practices-for-high-performance-etl-processing-using-amazon-redshift/" target="_blank" rel="noopener noreferrer">Top 8 Best Practices for High-Performance ETL Processing Using Amazon Redshift</a>.</p>
<p><img class="alignnone size-thumbnail wp-image-2822" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2017/08/09/data_warehouse_quick_start-150x150.gif" alt="" width="150" height="150" /></p>
<hr />
<h3>About the Author</h3>
<p><img class="size-full wp-image-4364 alignleft" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/09/Borg.png" alt="" width="113" height="153" /></p>
<p><strong>Stephen Borg is the Head of Big Data and BI at Cerberus Technologies.</strong> He has a background in platform software engineering, and first became involved in data warehousing using the typical RDBMS, SQL, ETL, and BI tools. He quickly became passionate about providing insight to help others optimize the business and add personalization to products. He is now the Head of Big Data and BI at Cerberus Technologies.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>Build a Multi-Tenant Amazon EMR Cluster with Kerberos, Microsoft Active Directory Integration and EMRFS Authorizationhttps://aws.amazon.com/blogs/big-data/build-a-multi-tenant-amazon-emr-cluster-with-kerberos-microsoft-active-directory-integration-and-emrfs-authorization/
Tue, 06 Feb 2018 23:42:57 +000084a00c54041073cdee7296a84826a2d493a71a14In this post, we will discuss what EMRFS authorization is (Amazon S3 storage-level access control) and show how to configure the role mappings with detailed examples.<p>One of the challenges faced by our customers—especially those in highly regulated industries—is balancing the need for security with flexibility. In this post, we cover how to enable multi-tenancy and increase security by using <a href="https://aws.amazon.com/about-aws/whats-new/2017/11/now-enable-kerberos-authentication-and-emrfs-authorization-in-amazon-emr/" target="_blank" rel="noopener noreferrer">EMRFS (EMR File System) authorization</a>, the <a href="https://aws.amazon.com/s3/" target="_blank" rel="noopener noreferrer">Amazon S3</a> storage-level authorization on <a href="https://aws.amazon.com/emr/" target="_blank" rel="noopener noreferrer">Amazon EMR</a>.</p>
<p>Amazon EMR is an easy, fast, and scalable analytics platform enabling large-scale data processing. EMRFS authorization provides Amazon S3 storage-level authorization by configuring EMRFS with multiple IAM roles. With this functionality enabled, different users and groups can share the same cluster and assume their own IAM roles respectively.</p>
<p>Simply put, on Amazon EMR, we can now have an <a href="https://aws.amazon.com/ec2/" target="_blank" rel="noopener noreferrer">Amazon EC2</a> role per user assumed at run time instead of one general EC2 role at the cluster level. When the user is trying to access Amazon S3 resources, Amazon EMR evaluates against a predefined mappings list in EMRFS configurations and picks up the right role for the user.</p>
<p>In this post, we will discuss what EMRFS authorization is (Amazon S3 storage-level access control) and show how to configure the role mappings with detailed examples. You will then have the desired permissions in a multi-tenant environment. We also demo Amazon S3 access from HDFS command line, Apache Hive on Hue, and Apache Spark.<span id="more-4324"></span></p>
<h2>EMRFS authorization for Amazon S3</h2>
<p>There are two prerequisites for using this feature:</p>
<ol>
<li>Users must be authenticated, because EMRFS needs to map the current user/group/prefix to a predefined user/group/prefix. There are several authentication options. In this post, we launch a <a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-kerberos.html" target="_blank" rel="noopener noreferrer">Kerberos</a>-enabled cluster that manages the Key Distribution Center (KDC) on the master node, and enable a one-way trust from the KDC to a Microsoft Active Directory domain.</li>
<li>The application must support accessing Amazon S3 via EMRFS. Applications that have their own S3FileSystem APIs (for example, Presto) are not supported at this time.</li>
</ol>
<p>EMRFS supports three types of mapping entries: user, group, and Amazon S3 prefix. Let’s use an example to show how this works.</p>
<p>Assume that you have the following three identities in your organization, and they are defined in the Active Directory:</p>
<p><img class="alignnone size-full wp-image-4330" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs1.png" alt="" width="800" height="210" /></p>
<p>To enable all these groups and users to share the EMR cluster, you need to define the following IAM roles:</p>
<p><img class="alignnone size-full wp-image-4331" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs2.png" alt="" width="800" height="158" /></p>
<p>In this case, you create a separate Amazon EC2 role that doesn’t give any permission to Amazon S3. Let’s call the role the base role (the EC2 role attached to the EMR cluster), which in this example is named <strong>EMR_EC2_RestrictedRole</strong>. Then, you define all the Amazon S3 permissions for each specific user or group in their own roles. The restricted role serves as the fallback role when the user doesn’t belong to any user/group, nor does the user try to access any listed Amazon S3 prefixes defined on the list.</p>
<p><strong>Important: </strong>For all other roles, like <strong>emrfs_auth_group_role_data_eng</strong>, you need to add the base role (<strong>EMR_EC2_RestrictedRole</strong>) as the trusted entity so that it can assume other roles. See the following example:</p>
<div class="hide-language">
<pre><code class="lang-json">{
&quot;Version&quot;: &quot;2012-10-17&quot;,
&quot;Statement&quot;: [
{
&quot;Effect&quot;: &quot;Allow&quot;,
&quot;Principal&quot;: {
&quot;Service&quot;: &quot;ec2.amazonaws.com&quot;
},
&quot;Action&quot;: &quot;sts:AssumeRole&quot;
},
{
&quot;Effect&quot;: &quot;Allow&quot;,
&quot;Principal&quot;: {
&quot;AWS&quot;: &quot;arn:aws:iam::511586466501:role/EMR_EC2_RestrictedRole&quot;
},
&quot;Action&quot;: &quot;sts:AssumeRole&quot;
}
]
}</code></pre>
</div>
<p>The following is an example policy for the admin user role <strong>(emrfs_auth_user_role_admin_user)</strong>:</p>
<div class="hide-language">
<pre><code class="lang-json">{
&quot;Version&quot;: &quot;2012-10-17&quot;,
&quot;Statement&quot;: [
{
&quot;Effect&quot;: &quot;Allow&quot;,
&quot;Action&quot;: &quot;s3:*&quot;,
&quot;Resource&quot;: &quot;*&quot;
}
]
}</code></pre>
</div>
<p>We are assuming the admin user has access to all buckets in this example.</p>
<p>The following is an example policy for the data science group role <strong>(emrfs_auth_group_role_data_sci)</strong>:</p>
<div class="hide-language">
<pre><code class="lang-json">{
&quot;Version&quot;: &quot;2012-10-17&quot;,
&quot;Statement&quot;: [
{
&quot;Effect&quot;: &quot;Allow&quot;,
&quot;Resource&quot;: [
&quot;arn:aws:s3:::emrfs-auth-data-science-bucket-demo/*&quot;,
&quot;arn:aws:s3:::emrfs-auth-data-science-bucket-demo&quot;
],
&quot;Action&quot;: [
&quot;s3:*&quot;
]
}
]
}</code></pre>
</div>
<p>This role grants all Amazon S3 permissions to the <strong>emrfs-auth-data-science-bucket-demo</strong> bucket and all the objects in it. Similarly, the policy for the role <strong>emrfs_auth_group_role_data_eng </strong>is shown below<strong>:</strong></p>
<div class="hide-language">
<pre><code class="lang-json">{
&quot;Version&quot;: &quot;2012-10-17&quot;,
&quot;Statement&quot;: [
{
&quot;Effect&quot;: &quot;Allow&quot;,
&quot;Resource&quot;: [
&quot;arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo/*&quot;,
&quot;arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo&quot;
],
&quot;Action&quot;: [
&quot;s3:*&quot;
]
}
]
}</code></pre>
</div>
<h3>Example role mappings configuration</h3>
<p>To configure EMRFS authorization, you use an EMR security configuration. Here is the configuration we use in this post:<img class="alignnone size-full wp-image-4333" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs3.png" alt="" width="800" height="216" /></p>
<p>Consider the following scenario.</p>
<p>First, the admin user <strong>admin1</strong> tries to log in and run a command to access Amazon S3 data through EMRFS. The first role <strong>emrfs_auth_user_role_admin_user</strong> on the mapping list, which is a user role, is mapped and picked up. Then <strong>admin1</strong> has access to the Amazon S3 locations that are defined in this role.</p>
<p>Then a user from the data engineer group (<strong>grp_data_engineering</strong>) tries to access a data bucket to run some jobs. When EMRFS sees that the user is a member of the <strong>grp_data_engineering</strong> group, the group role <strong>emrfs_auth_group_role_data_eng</strong> is assumed, and the user has proper access to Amazon S3 that is defined in the <strong>emrfs_auth_group_role_data_eng</strong> role.</p>
<p>Next comes a third user, who is not an admin and doesn’t belong to any of the groups. After failing evaluation of the top three entries, EMRFS evaluates whether the user is trying to access a certain Amazon S3 prefix defined in the last mapping entry. This type of mapping entry is called the <em>prefix</em> type. If the user is trying to access <tt>s3://emrfs-auth-default-bucket-demo/</tt>, then the prefix mapping is in effect, and the prefix role <strong>emrfs_auth_prefix_role_default_s3_prefix</strong> is assumed.</p>
<p>If the user is not trying to access any of the Amazon S3 paths that are defined on the list—which means it failed the evaluation of all the entries—it only has the permissions defined in the <strong>EMR_EC2_RestrictedRole</strong>. This role is assumed by the EC2 instances in the cluster.</p>
<p>In this process, all the mappings defined are evaluated in the defined order, and the first role that is mapped is assumed, and the rest of the list is skipped.</p>
<h2>Setting up an EMR cluster and mapping Active Directory users and groups</h2>
<p>Now that we know how EMRFS authorization role mapping works, the next thing we need to think about is how we can use this feature in an easy and manageable way.</p>
<h3>Active Directory setup</h3>
<p>Many customers manage their users and groups using Microsoft Active Directory or other tools like OpenLDAP. In this post, we create the Active Directory on an Amazon EC2 instance running Windows Server and create the users and groups we will be using in the example below. After setting up Active Directory, we use the Amazon EMR Kerberos auto-join capability to establish a one-way trust from the KDC running on the EMR master node to the Active Directory domain on the EC2 instance. You can use your own directory service as long as it speaks LDAP (Lightweight Directory Access Protocol).</p>
<p>To create and join Active Directory to Amazon EMR, follow the steps in the blog post <a href="https://aws.amazon.com/blogs/big-data/use-kerberos-authentication-to-integrate-amazon-emr-with-microsoft-active-directory/" target="_blank" rel="noopener noreferrer">Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory</a>.</p>
<p>After configuring Active Directory, you can create all the users and groups using the Active Directory tools and add users to the appropriate groups. In this example, we created users such as <strong>admin1</strong>, <strong>dataeng1</strong>, and <strong>datascientist1</strong>, and groups such as <strong>grp_data_engineering</strong> and <strong>grp_data_science</strong>, and then added the users to the right groups.</p>
<h3>Join the EMR cluster to an Active Directory domain</h3>
<p>For clusters with Kerberos, Amazon EMR now supports automated Active Directory domain joins. You can use the security configuration to configure the one-way trust from the KDC to the Active Directory domain. You also configure the EMRFS role mappings in the same security configuration.</p>
<p>The following is an example of the EMR security configuration with a trusted Active Directory domain <tt>EMRKRB.TEST.COM</tt> and the EMRFS role mappings as we discussed earlier:</p>
<p><img class="alignnone size-full wp-image-4336" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs4.png" alt="" width="800" height="198" /></p>
<p>The EMRFS role mapping configuration is shown in this example:</p>
<p><img class="alignnone size-full wp-image-4337" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs5-1.png" alt="" width="800" height="216" /></p>
<p>We will also provide an example AWS CLI command that you can run.</p>
<h2>Launching the EMR cluster and running the tests</h2>
<p>Now you have configured Kerberos and EMRFS authorization for Amazon S3.</p>
<p>Additionally, you need to configure Hue with Active Directory using the Amazon EMR configuration API in order to log in using the AD users created before. The following is an example of Hue AD configuration.</p>
<div class="hide-language">
<pre><code class="lang-json">[
{
&quot;Classification&quot;:&quot;hue-ini&quot;,
&quot;Properties&quot;:{
},
&quot;Configurations&quot;:[
{
&quot;Classification&quot;:&quot;desktop&quot;,
&quot;Properties&quot;:{
},
&quot;Configurations&quot;:[
{
&quot;Classification&quot;:&quot;ldap&quot;,
&quot;Properties&quot;:{
},
&quot;Configurations&quot;:[
{
&quot;Classification&quot;:&quot;ldap_servers&quot;,
&quot;Properties&quot;:{
},
&quot;Configurations&quot;:[
{
&quot;Classification&quot;:&quot;AWS&quot;,
&quot;Properties&quot;:{
&quot;base_dn&quot;:&quot;DC=emrkrb,DC=test,DC=com&quot;,
&quot;ldap_url&quot;:&quot;ldap://emrkrb.test.com&quot;,
&quot;search_bind_authentication&quot;:&quot;false&quot;,
&quot;bind_dn&quot;:&quot;CN=adjoiner,CN=users,DC=emrkrb,DC=test,DC=com&quot;,
&quot;bind_password&quot;:&quot;Abc123456&quot;,
&quot;create_users_on_login&quot;:&quot;true&quot;,
&quot;nt_domain&quot;:&quot;emrkrb.test.com&quot;
},
&quot;Configurations&quot;:[
]
}
]
}
]
},
{
&quot;Classification&quot;:&quot;auth&quot;,
&quot;Properties&quot;:{
&quot;backend&quot;:&quot;desktop.auth.backend.LdapBackend&quot;
},
&quot;Configurations&quot;:[
]
}
]
}
]
}</code></pre>
</div>
<p><strong>Note:</strong> In the preceding configuration JSON file, change the values as required before pasting it into the software setting section in the Amazon EMR console.</p>
<p>Now let’s use this configuration and the security configuration you created before to launch the cluster.</p>
<p>In the Amazon EMR console, choose <strong>Create cluster</strong>. Then choose<strong> Go to advanced options</strong>. On the <strong>Step1: Software and Steps</strong> page, under <strong>Edit software settings (optional)</strong>, paste the configuration in the box.</p>
<p><img class="alignnone size-full wp-image-4339" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs6.png" alt="" width="800" height="256" /></p>
<p>The rest of the setup is the same as an ordinary cluster setup, except in the <strong>Security Options</strong> section. In <strong>Step 4: Security</strong>, under <strong>Permissions</strong>, choose <strong>Custom</strong>, and then choose the <strong>RestrictedRole</strong> that you created before.</p>
<p>Choose the appropriate subnets (these must meet the base requirements for a successful Active Directory join—see the <a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/" target="_blank" rel="noopener noreferrer">Amazon EMR Management Guide</a> for more details), and choose the appropriate security groups to make sure the cluster can talk to the Active Directory. Choose a key pair so that you can log in and configure the cluster.</p>
<p>Most importantly, choose the security configuration that you created earlier to enable Kerberos and EMRFS authorization for Amazon S3.</p>
<p>You can use the following AWS CLI command to create a cluster.</p>
<div class="hide-language">
<pre><code class="lang-code">aws emr create-cluster --name &quot;TestEMRFSAuthorization&quot; \
--release-label emr-5.10.0 \ --instance-type m3.xlarge \
--instance-count 3 \
--ec2-attributes InstanceProfile=EMR_EC2_DefaultRole,KeyName=MyEC2KeyPair \ --service-role EMR_DefaultRole \
--security-configuration MyKerberosConfig \
--configurations file://hue-config.json \
--applications Name=Hadoop Name=Hive Name=Hue Name=Spark \
--kerberos-attributes Realm=EC2.INTERNAL, \ KdcAdminPassword=&lt;YourClusterKDCAdminPassword&gt;, \ ADDomainJoinUser=&lt;YourADUserLogonName&gt;,ADDomainJoinPassword=&lt;YourADUserPassword&gt;, \
CrossRealmTrustPrincipalPassword=&lt;MatchADTrustPwd&gt;</code></pre>
</div>
<p><strong>Note:</strong> If you create the cluster using CLI, you need to save the JSON configuration for Hue into a file named hue-config.json and place it on the server where you run the CLI command.</p>
<p>After the cluster gets into the <strong>Waiting</strong> state, try to connect by using SSH into the cluster using the Active Directory user name and password.</p>
<div class="hide-language">
<pre><code class="lang-code">ssh -l <span style="color: #0000ff">aduser@ad.domain</span> &lt;EMR IP or DNS name&gt;</code></pre>
</div>
<p>Quickly run two commands to show that the Active Directory join is successful:</p>
<ol>
<li><tt>id [user name]</tt> shows the mapped AD users and groups in Linux.</li>
<li><tt>hdfs groups [user name]</tt> shows the mapped group in Hadoop.</li>
</ol>
<p>Both should return the current Active Directory user and group information if the setup is correct.</p>
<p>Now, you can test the user mapping first. Log in with the <strong>admin1</strong> user, and run a Hadoop list directory command:</p>
<div class="hide-language">
<pre><code class="lang-code">hadoop fs -ls s3://emrfs-auth-data-science-bucket-demo/</code></pre>
</div>
<p>Now switch to a user from the data engineering group.</p>
<p>Retry the previous command against the same bucket. Because the data engineering group has no access to the data science bucket, it should throw an Amazon S3 <tt>Access Denied</tt> exception.</p>
<p>When you instead list an Amazon S3 bucket that the data engineering group does have access to, the group mapping is triggered.</p>
<div class="hide-language">
<pre><code class="lang-code">hadoop fs -ls s3://emrfs-auth-data-engineering-bucket-demo/</code></pre>
</div>
<p>It successfully returns the listing results. Next we will test Apache Hive and then Apache Spark.</p>
<p>To run jobs successfully, you need to create a home directory for every user in HDFS for staging data under <tt>/user/&lt;username&gt;</tt>. Users can configure a step to create a home directory at cluster launch time for every user who has access to the cluster. In this example, you use Hue since Hue will create the home directory in HDFS for the user at the first login. Here Hue also needs to be integrated with the same Active Directory as explained in the example configuration described earlier.</p>
<p>First, log in to Hue as a data engineer user, and open a Hive Notebook in Hue. Then run a query to create a new table pointing to the data engineer bucket, <tt>s3://emrfs-auth-data-engineering-bucket-demo/table1_data_eng/</tt>.</p>
<p><img class="alignnone size-full wp-image-4341" style="margin: 20px 0px 20px 0px" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs7.png" alt="" width="721" height="706" /></p>
<p><img class="alignnone size-full wp-image-4342" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs8.png" alt="" width="800" height="264" /></p>
<p>You can see that the table was created successfully. Now try to create another table pointing to the data science group’s bucket, where the data engineer group doesn’t have access.</p>
<p><img class="alignnone size-full wp-image-4345" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs9.png" alt="" width="800" height="306" /></p>
<p>It failed and threw an Amazon S3 <tt>Access Denied</tt> error.</p>
<p>Now insert one line of data into the successfully created table.</p>
<p><img class="alignnone size-full wp-image-4346" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs10.png" alt="" width="800" height="315" /></p>
<p><img class="alignnone size-full wp-image-4347" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs11.png" alt="" width="800" height="490" /></p>
<p>Next, log out, switch to a data science group user, and create another table, <tt>test2_datasci_tb</tt>.</p>
<p><img class="alignnone size-full wp-image-4348" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emfrs12.png" alt="" width="800" height="211" /></p>
<p>The creation is successful.</p>
<p>The last task is to test Spark (it requires the user directory, but Hue created one in the previous step).</p>
<p>Now let’s come back to the command line and run some Spark commands.</p>
<p>Log in to the master node using the <strong>datascientist1</strong> user:</p>
<p><img class="alignnone size-full wp-image-4349" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emrfs13.png" alt="" width="800" height="42" /></p>
<p>Start the SparkSQL interactive shell by typing <tt>spark-sql</tt>, and run the <tt>show tables</tt> command. It should list the tables that you created using Hive.</p>
<p><img class="alignnone size-full wp-image-4350" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emrfs14.png" alt="" width="800" height="135" /></p>
<p>As a data science group user, try <tt>select</tt> on both tables. You will find that you can only select the table defined in the location that your group has access to.</p>
<p><img class="alignnone size-full wp-image-4351" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/emrfs15.png" alt="" width="800" height="82" /></p>
<h2>Conclusion</h2>
<p>EMRFS authorization for Amazon S3 enables you to have multiple roles on the same cluster, providing flexibility to configure a shared cluster for different teams to achieve better efficiency. The Active Directory integration and group mapping make it much easier for you to manage your users and groups, and provides better auditability in a multi-tenant environment.</p>
<hr />
<h3>Additional Reading</h3>
<p>If you found this post useful, be sure to check out <a href="https://aws.amazon.com/blogs/big-data/use-kerberos-authentication-to-integrate-amazon-emr-with-microsoft-active-directory/" target="_blank" rel="noopener noreferrer">Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory</a> and <a href="https://aws.amazon.com/blogs/big-data/launching-and-running-an-amazon-emr-cluster-inside-a-vpc/" target="_blank" rel="noopener noreferrer">Launching and Running an Amazon EMR Cluster inside a VPC</a>.</p>
<p><img class="alignnone size-thumbnail wp-image-4356" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/vpc-150x150.png" alt="" width="150" height="150" /></p>
<hr />
<h3>About the Authors</h3>
<p><img class="size-full wp-image-4354 alignleft" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/06/Songzhi.png" alt="" width="113" height="150" /></p>
<p><strong>Songzhi Liu is a Big Data Consultant with AWS Professional Services.</strong> He works closely with AWS customers to provide them Big Data &amp; Machine Learning solutions and best practices on the Amazon cloud.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>Dynamically Create Friendly URLs for Your Amazon EMR Web Interfaceshttps://aws.amazon.com/blogs/big-data/dynamically-create-friendly-urls-for-your-amazon-emr-web-interfaces/
Sun, 04 Feb 2018 02:04:24 +00004990f509ae0874ed835a1293d774b4ada8285a7aThis solution provides a serverless approach to automatically assigning a friendly name for your EMR cluster for easy access to popular notebooks and other web interfaces.<p><a href="https://aws.amazon.com/emr" target="_blank" rel="noopener noreferrer">Amazon EMR</a> enables data analysts and scientists to deploy a cluster of any size running popular frameworks such as Spark, HBase, Presto, and Flink in minutes. When you launch a cluster, Amazon EMR automatically configures the underlying Amazon EC2 instances with the frameworks and applications that you choose for your cluster. This can include popular web interfaces such as Hue workbench, Zeppelin notebook, and Ganglia monitoring dashboards and tools.</p>
<p>These web interfaces are hosted on the EMR master node and must be accessed using the public DNS name of the master node (master public DNS value). The master public DNS value is dynamically created, is not very user friendly, and is hard to remember; it looks something like <em>ip-###-###-###-###.us-west-2.compute.internal</em>. Not having a friendly URL to connect to the popular workbench or notebook interfaces can disrupt workflows and erode the agility you have gained.</p>
<p>Some customers have addressed this challenge through custom bootstrap actions, steps, or external scripts that periodically check for new clusters and register a friendlier name in DNS. These approaches either put additional burden on the data practitioners or require additional resources to execute the scripts. In addition, there is typically some lag time associated with such scripts. They often don’t do a great job cleaning up the DNS records after the cluster has terminated, potentially resulting in a security risk.</p>
<p>The solution in this post provides an automated, serverless approach to registering a friendly master node name for easy access to the web interfaces.<span id="more-4296"></span></p>
<h2>AWS services</h2>
<p>Inspired in part by our colleague’s post on <a href="https://aws.amazon.com/blogs/compute/building-a-dynamic-dns-for-route-53-using-cloudwatch-events-and-lambda/" target="_blank" rel="noopener noreferrer">Building a Dynamic DNS for Route 53 using CloudWatch Events and Lambda</a>, this solution leverages <a href="https://aws.amazon.com/cloudwatch" target="_blank" rel="noopener noreferrer">Amazon CloudWatch Events</a>, <a href="https://aws.amazon.com/lambda" target="_blank" rel="noopener noreferrer">AWS Lambda</a>, and <a href="https://aws.amazon.com/route53" target="_blank" rel="noopener noreferrer">Amazon Route 53</a> to dynamically register a CNAME with a friendly name in a Route 53 private hosted zone.</p>
<p>Before I dive deeper, I review these key services and how they are part of this solution.</p>
<h3>CloudWatch Events</h3>
<p>CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources. Using simple rules, you can match events and route them to one or more target functions or streams. An event can be generated in one of four ways:</p>
<ul>
<li>From an AWS service when resources change state</li>
<li>From API calls that are delivered via AWS CloudTrail</li>
<li>From your own code that can generate application-level events</li>
<li>Issued on a <a href="https://en.wikipedia.org/wiki/Cron" target="_blank" rel="noopener noreferrer">cron</a>-style schedule</li>
</ul>
<p>In this solution, I cover the first type of event, which is automatically emitted by EMR when the cluster state changes. Based on the state of this event, either create or update the DNS record in Route 53 when the cluster state changes to STARTING, or delete the DNS record when the cluster is no longer needed and the state changes to TERMINATED. For more information about all EMR event details, see <a href="http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-manage-cloudwatch-events.html" target="_blank" rel="noopener noreferrer">Monitor CloudWatch Events</a>.</p>
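<p>An abridged example of such an event is shown below; the cluster ID and name are placeholders, and real events carry additional fields:</p>
<div class="hide-language">
<pre><code class="lang-json">{
    &quot;source&quot;: &quot;aws.emr&quot;,
    &quot;detail-type&quot;: &quot;EMR Cluster State Change&quot;,
    &quot;region&quot;: &quot;us-west-2&quot;,
    &quot;detail&quot;: {
        &quot;clusterId&quot;: &quot;j-1ABCDEFGHIJKL&quot;,
        &quot;name&quot;: &quot;AnalyticsCluster&quot;,
        &quot;state&quot;: &quot;STARTING&quot;
    }
}</code></pre>
</div>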
<h3>Route 53 private hosted zones</h3>
<p>A <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html" target="_blank" rel="noopener noreferrer">private hosted zone</a> is a container that holds information about how to route traffic for a domain and its subdomains within one or more VPCs. Private hosted zones enable you to use custom DNS names for your internal resources without exposing the names or IP addresses to the internet.</p>
<p>Route 53 supports resource record sets with a wide range of <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html" target="_blank" rel="noopener noreferrer">record types</a>. In this solution, you use a CNAME record that is used to specify a domain name as an alias for another domain (the ‘canonical’ domain). You use a friendly name of the cluster as the CNAME for the EMR master public DNS value.</p>
<p>You are using private hosted zones because an EMR cluster is typically deployed within a private subnet and is accessed either from within the VPC or from on-premises resources over VPN or AWS Direct Connect. To resolve domain names in private hosted zones from your on-premises network, configure a DNS forwarder, as described in <a href="https://aws.amazon.com/premiumsupport/knowledge-center/r53-private-ubuntu/" target="_blank" rel="noopener noreferrer">How can I resolve Route 53 private hosted zones from an on-premises network via an Ubuntu instance?</a>.</p>
<h3>Lambda</h3>
<p>Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda executes your code only when needed and scales automatically to thousands of requests per second. Lambda takes care of high availability, and server and OS maintenance and patching. You pay only for the consumed compute time. There is no charge when your code is not running.</p>
<p>Lambda provides the ability to invoke your code in response to events, such as when an object is put to an Amazon S3 bucket or as in this case, when a CloudWatch event is emitted. As part of this solution, you deploy a Lambda function as a target that is invoked by CloudWatch Events when the event matches your rule. You also configure the necessary permissions based on the Lambda permissions model, including a <a href="https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html" target="_blank" rel="noopener noreferrer">Lambda function policy</a> and <a href="https://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html#lambda-intro-execution-role" target="_blank" rel="noopener noreferrer">Lambda execution role</a>.</p>
<h2>Putting it all together</h2>
<p>Now that you have all of the pieces, you can put together a complete solution. The following diagram illustrates how the solution works:</p>
<p><img class="alignnone size-full wp-image-4297" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/01/Dynamic1.png" alt="" width="800" height="392" /></p>
<ol>
<li>Start with a user activity such as launching or terminating an EMR cluster.</li>
<li>EMR automatically sends events to the CloudWatch Events stream.</li>
<li>A CloudWatch Events rule matches the specified event and routes it to a target, which in this case is a Lambda function. Here, the rule matches on the <em>EMR Cluster State Change</em> event type.</li>
<li>The Lambda function performs the following key steps (a minimal code sketch follows this list):
<ul>
<li>Get the <em>clusterId</em> value from the event detail and use it to call the EMR <em>DescribeCluster</em> API to retrieve the following data points:
<ul>
<li><em>MasterPublicDnsName</em> – public DNS name of the master node</li>
<li>The tag containing the friendly name to use as the CNAME for the cluster. The key name should be <em>cluster_name</em>, and the value should be specified as host.domain.com, where domain is the private hosted zone in which to update the DNS record.</li>
</ul> </li>
<li>Update DNS based on the state in the event detail.
<ul>
<li>If the state is STARTING, the function calls the Route 53 API to create or update a resource record set in the private hosted zone specified by the domain tag. This is a CNAME record mapped to <em>MasterPublicDnsName</em>.</li>
<li>Conversely, if the state is TERMINATED, the function calls the Route 53 API to delete the associated resource record set from the private hosted zone.</li>
</ul> </li>
</ul> </li>
</ol>
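<p>Before moving on to deployment, here is a minimal sketch of the function’s core logic. The downloadable <em>emr-dns-setter.py</em> in the CLI section below is the reference implementation; in this sketch, the <em>cluster_name</em> tag key comes from the validation section of this post, while the 300-second TTL and the hosted zone lookup by the tag’s domain suffix are assumptions.</p>
<div class="hide-language">
<pre><code class="lang-python">import boto3

emr = boto3.client('emr')
route53 = boto3.client('route53')

def lambda_handler(event, context):
    # EMR Cluster State Change events carry the cluster ID and new state.
    detail = event['detail']
    state = detail['state']
    if state not in ('STARTING', 'TERMINATED'):
        return

    # Look up the master public DNS name and the friendly-name tag.
    cluster = emr.describe_cluster(ClusterId=detail['clusterId'])['Cluster']
    master_dns = cluster.get('MasterPublicDnsName')
    tags = dict((t['Key'], t['Value']) for t in cluster.get('Tags', []))
    friendly_name = tags.get('cluster_name')  # e.g. finance-ingest.domain.com
    if not master_dns or not friendly_name or '.' not in friendly_name:
        return

    # The domain portion of the tag value names the private hosted zone.
    domain = friendly_name.split('.', 1)[1] + '.'
    zones = route53.list_hosted_zones_by_name(DNSName=domain)['HostedZones']
    if not zones or zones[0]['Name'] != domain:
        return

    # UPSERT creates or updates the CNAME on STARTING; DELETE removes it
    # on TERMINATED.
    action = 'UPSERT' if state == 'STARTING' else 'DELETE'
    route53.change_resource_record_sets(
        HostedZoneId=zones[0]['Id'],
        ChangeBatch={'Changes': [{
            'Action': action,
            'ResourceRecordSet': {
                'Name': friendly_name,
                'Type': 'CNAME',
                'TTL': 300,  # assumed TTL
                'ResourceRecords': [{'Value': master_dns}]
            }
        }]}
    )</code></pre>
</div>
<p>Using UPSERT for the STARTING state keeps the sketch idempotent if CloudWatch delivers the same event more than once.</p>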
<h2>Deploying the solution</h2>
<p>Because all of the components of this solution are serverless, use the <a href="https://github.com/awslabs/serverless-application-model" target="_blank" rel="noopener noreferrer">AWS Serverless Application Model</a> (AWS SAM) template to deploy the solution. AWS SAM is natively supported by AWS CloudFormation and provides a simplified syntax for expressing serverless resources, resulting in fewer lines of code.</p>
<h3>Overview of the SAM template</h3>
<p>For this solution, the SAM template has 76 lines of text, compared to 142 lines when written with plain CloudFormation resources (and writing the template in YAML would make it slightly smaller still). The solution can be deployed using the AWS Management Console, AWS Command Line Interface (AWS CLI), or <a href="https://github.com/awslabs/aws-sam-local" target="_blank" rel="noopener noreferrer">AWS SAM Local</a>.</p>
<p>CloudFormation <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-section-structure.html" target="_blank" rel="noopener noreferrer">transforms</a> help simplify template authoring by condensing a multiple-line resource declaration into a single line in your template. To inform CloudFormation that your template defines a serverless application, add a line under the template format version as follows:</p>
<div class="hide-language">
<pre><code class="lang-code">&quot;AWSTemplateFormatVersion&quot;:&quot;2010-09-09&quot;,
&quot;Transform&quot;:&quot;AWS::Serverless-2016-10-31&quot;,</code></pre>
</div>
<p>Before SAM, you would use the AWS::Lambda::Function resource type to define your Lambda function. You would then need a resource to define the permissions for the function (AWS::Lambda::Permission), another resource to define a Lambda execution role (AWS::IAM::Role), and finally a CloudWatch Events resource (AWS::Events::Rule) to trigger the function.</p>
<p>With SAM, you need to define just a single resource for your function, AWS::Serverless::Function. Using this single resource type, you can define everything that you need, including function properties such as function handler, runtime, and code URI, as well as the required IAM policies and the CloudWatch event.</p>
<div class="hide-language">
<pre><code class="lang-python">&quot;DnsSetterLambda&quot;:{
&quot;Type&quot;:&quot;AWS::Serverless::Function&quot;,
&quot;Properties&quot;:{
&quot;Handler&quot;:&quot;emr-dns-setter.lambda_handler&quot;,
&quot;Runtime&quot;:&quot;python2.7&quot;,
&quot;CodeUri&quot;:&quot;emr-dns-setter.py.zip&quot;,
&quot;Description&quot;:&quot;Create PHZ record for EMR cluster&quot;,
&quot;Timeout&quot;:90,
&quot;Policies&quot;:[
{
&quot;Version&quot;:&quot;2012-10-17&quot;,
&quot;Statement&quot;:[
{
&quot;Effect&quot;:&quot;Allow&quot;,
&quot;Action&quot;:&quot;ec2:Describe*&quot;,
&quot;Resource&quot;:&quot;*&quot;
},
{
&quot;Effect&quot;:&quot;Allow&quot;,
&quot;Action&quot;:[
&quot;elasticmapreduce:Describe*&quot;
],
&quot;Resource&quot;:&quot;*&quot;
},
{
&quot;Effect&quot;:&quot;Allow&quot;,
&quot;Action&quot;:[
&quot;logs:CreateLogGroup&quot;,
&quot;logs:CreateLogStream&quot;,
&quot;logs:PutLogEvents&quot;
],
&quot;Resource&quot;:&quot;*&quot;
},
{
&quot;Effect&quot;:&quot;Allow&quot;,
&quot;Action&quot;:[
&quot;route53:ChangeResourceRecordSets&quot;,
&quot;route53:GetHostedZone&quot;,
&quot;route53:ListHostedZones&quot;,
&quot;route53:ListHostedZonesByName&quot;,
&quot;route53:ListResourceRecordSets&quot;
],
&quot;Resource&quot;:[
&quot;*&quot;
]
}
]
}
],
&quot;Events&quot;:{
&quot;CloudWatchEventDNS&quot;:{
&quot;Type&quot;:&quot;CloudWatchEvent&quot;,
&quot;Properties&quot;:{
&quot;Pattern&quot;:{
&quot;source&quot;:[
&quot;aws.emr&quot;
],
&quot;detail-type&quot;:[
&quot;EMR Cluster State Change&quot;
]
}
}
}
}
}
}
},
&quot;Outputs&quot;:{
}
}</code></pre>
</div>
<p>A few additional things to note in the code example:</p>
<ul>
<li><strong>CodeUri</strong> – Before you can deploy a SAM template, first upload your Lambda function code zip to S3. You can do this manually or use the <tt>aws cloudformation package</tt> CLI command to automate the task of uploading local artifacts to an S3 bucket, as shown later.</li>
<li><strong>Lambda execution role and permissions</strong> – You are not specifying a Lambda execution role in the template. Rather, you are providing the required permissions as IAM policy documents. When the template is submitted, CloudFormation expands the AWS::Serverless::Function resource, declaring a Lambda function and an execution role. The created role has two attached policies: a default AWSLambdaBasicExecutionRole and the inline policy specified in the template.</li>
<li><strong>CloudWatch Events rule</strong> – Instead of specifying a CloudWatch Events resource type, you are defining an event source object as a property of the function itself. When the template is submitted, CloudFormation expands this into a CloudWatch Events rule resource and automatically creates the Lambda resource-based permissions to allow the CloudWatch Events rule to trigger the function.</li>
</ul>
<h3>Deploying the solution using the console</h3>
<p>1.) Log in to the <a href="https://console.aws.amazon.com/cloudformation/" target="_blank" rel="noopener noreferrer">CloudFormation console</a> and choose <strong>Create stack</strong>.</p>
<p>2.) For <strong>Choose a template</strong>, select <strong>Specify an Amazon S3 template URL</strong> and enter the following URL:</p>
<div class="hide-language">
<pre><code class="lang-code">https://s3.amazonaws.com/aws-bigdata-blog/artifacts/emr-dns-setter/emr-dns-setter-sam.json
</code></pre>
</div>
<p>NOTE: If you are trying this solution outside of us-east-1, then you should download the necessary files, upload them to the buckets in your region, edit the script as appropriate and then run it or use the CLI deployment method below.</p>
<p>3.) Choose <strong>Next</strong>.</p>
<p>4.) On the <strong>Specify Details</strong> page, keep or modify the stack name and choose <strong>Next</strong>.</p>
<p>5.) On the <strong>Options</strong> page, choose <strong>Next</strong>.</p>
<p>6.) On the <strong>Review</strong> page, take the following steps:</p>
<ul>
<li>Acknowledge the two Transform access capabilities. This allows the CloudFormation transform to create the required IAM resources with custom names.</li>
</ul>
<p><img class="alignnone size-full wp-image-4302" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/01/Dynamic2.png" alt="" width="800" height="127" /></p>
<ul>
<li>Under <strong>Transforms</strong>, choose <strong>Create Change Set</strong>.</li>
</ul>
<p><img class="alignnone size-full wp-image-4303" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/01/Dynamic3.png" alt="" width="800" height="133" /></p>
<p>Wait a few seconds for the change set to be created before proceeding.</p>
<p>7.) Choose <strong>Execute</strong> to deploy the template.</p>
<p>After the template is deployed, you should see four resources created:</p>
<p><img class="alignnone size-full wp-image-4304" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/01/Dynamic5.png" alt="" width="800" height="166" /></p>
<h3>Deploying the solution using the AWS CLI</h3>
<ul>
<li>Download the Lambda function code <a href="https://s3.amazonaws.com/aws-bigdata-blog/artifacts/emr-dns-setter/emr-dns-setter.py" target="_blank" rel="noopener noreferrer"><em>emr-dns-setter.py</em></a> and the SAM template <a href="https://s3.amazonaws.com/aws-bigdata-blog/artifacts/emr-dns-setter/emr-dns-setter-sam.json" target="_blank" rel="noopener noreferrer"><em>emr-dns-setter-sam.json</em></a> to your local machine.</li>
<li>Modify the CodeUri property in <em>emr-dns-setter-sam.json</em> template to the local path of the function code:</li>
</ul>
<div class="hide-language">
<pre><code class="lang-python">&lt;file path&gt;\emr-dns-setter.py</code></pre>
</div>
<ul>
<li>Use the <tt>aws cloudformation package</tt> CLI command to upload the package.</li>
</ul>
<div class="hide-language">
<pre><code class="lang-code">aws cloudformation package --template-file &lt;FILE_PATH&gt;\emr-dns-setter-sam.json --output-template-file serverless-output.template --s3-bucket &lt;BUCKET_USED_TO_UPLOAD_YOUR_ARTIFACTS&gt; </code></pre>
</div>
<p>After the package is successfully uploaded, the output should look as follows:</p>
<div class="hide-language">
<pre><code class="lang-code">Uploading to 0f6d12c7872b50b37dbfd5a60385b854 1872 / 1872.0 (100.00%)
Successfully packaged artifacts and wrote output template to file serverless-output.template.</code></pre>
</div>
<p>The CodeUri property in <em>serverless-output.template</em> is now referencing the packaged artifacts in the S3 bucket that you specified:</p>
<div class="hide-language">
<pre><code class="lang-code">s3://&lt;bucket&gt;/0f6d12c7872b50b37dbfd5a60385b854</code></pre>
</div>
<ul>
<li>Use the <tt>aws cloudformation deploy</tt> CLI command to deploy the stack:</li>
</ul>
<div class="hide-language">
<pre><code class="lang-code">aws cloudformation deploy --template-file &lt;FILE PATH&gt;\serverless-output.template --stack-name &lt;STACK_NAME&gt; --capabilities CAPABILITY_IAM </code></pre>
</div>
<p>You should see the following output after the stack has been successfully created:</p>
<div class="hide-language">
<pre><code class="lang-code">Waiting for changeset to be created...
Waiting for stack create/update to complete
Successfully created/updated stack – EmrDnsSetterCli</code></pre>
</div>
<h3>Validating results</h3>
<p>To test the solution, launch an EMR cluster. The Lambda function looks for the <em>cluster_name</em> tag associated with the EMR cluster. Make sure to specify the friendly name of your cluster as <em>host.domain.com</em>, where domain is the private hosted zone in which to create the CNAME record.</p>
<p>Here is a sample CLI command to launch a cluster within a specific subnet in a VPC with the required tag <em>cluster_name</em>:</p>
<div class="hide-language">
<pre><code class="lang-code">aws emr create-cluster --tags LOB=&quot;finance&quot; cluster_name=&quot;finance-ingest.domain.com&quot; --release-label emr-5.3.1 --use-default-roles --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --ec2-attributes SubnetId=subnet-xxxxxxxx,KeyName=keyname --no-auto-terminate</code></pre>
</div>
<p>After the cluster is launched, log in to the Route 53 console. In the left navigation pane, choose <strong>Hosted Zones</strong> to view the list of private and public zones currently configured in Route 53. Select the hosted zone that matches the domain portion of the <em>cluster_name</em> tag that you specified when you launched the cluster. Verify that the resource records were created.</p>
<p><img class="alignnone size-full wp-image-4308" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/02/Dynamic6.png" alt="" width="595" height="182" /></p>
<p>You can also monitor the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cwe-metricscollected.html" target="_blank" rel="noopener noreferrer">CloudWatch Events metrics</a> that are published to CloudWatch every minute, such as the number of TriggeredRules and Invocations.</p>
<p>Now that you’ve verified that the Lambda function successfully updated the Route 53 resource records in the zone file, terminate the EMR cluster and verify that the records are removed by the same function.</p>
<h2>Conclusion</h2>
<p>This solution provides a serverless approach to automatically assigning a friendly name for your EMR cluster for easy access to popular notebooks and other web interfaces. CloudWatch Events also supports <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEvents-CrossAccountEventDelivery.html" target="_blank" rel="noopener noreferrer">cross-account event delivery</a>, so if you are running EMR clusters in multiple AWS accounts, all cluster state events across accounts can be consolidated into a single account.</p>
<p>I hope that this solution provides a small glimpse into the power of CloudWatch Events and Lambda and how they can be leveraged with EMR and other AWS big data services. For example, by using the EMR step state change event, you can chain various pieces of your analytics pipeline. You may have a transient cluster perform data ingest and, when the task successfully completes, spin up an ETL cluster for transformation and upload to Amazon Redshift. The possibilities are truly endless.</p>
<hr />
<h3>Additional Reading</h3>
<p>If you found this post useful, be sure to check out <a href="https://aws.amazon.com/blogs/big-data/securely-access-web-interfaces-on-amazon-emr-launched-in-a-private-subnet/" target="_blank" rel="noopener noreferrer">Securely Access Web Interfaces on Amazon EMR Launched in a Private Subnet</a> and <a href="https://aws.amazon.com/blogs/big-data/respond-to-state-changes-on-amazon-emr-clusters-with-amazon-cloudwatch-events/" target="_blank" rel="noopener noreferrer">Respond to State Changes on Amazon EMR Clusters with Amazon CloudWatch Events</a>.</p>
<p><img class="alignnone size-thumbnail wp-image-4320" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/05/AddRead-150x150.png" alt="" width="150" height="150" /></p>
<hr />
<h3>About the Authors</h3>
<p><img class="size-full wp-image-4318 alignleft" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/05/ilyep.png" alt="" width="113" height="140" /></p>
<p><strong>Ilya is a solutions architect with AWS. </strong>He helps customers to innovate on the AWS platform by building highly available, scalable, and secure architectures. He enjoys spending time outdoors and building Lego creations with his kids.</p>
<p><strong><img class="size-full wp-image-4319 alignleft" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/02/05/rdahlstr.png" alt="" width="113" height="148" />Roger is a solutions architect with AWS. </strong>He helps customers to implement cloud native architectures so they can focus more on what sets them apart as an organization. Outside of work, he can be found woodworking or cooking with his family.</p>
Top 8 Best Practices for High-Performance ETL Processing Using Amazon Redshifthttps://aws.amazon.com/blogs/big-data/top-8-best-practices-for-high-performance-etl-processing-using-amazon-redshift/
Fri, 26 Jan 2018 17:25:14 +0000b4e9c6fa57f27189bcd3ec60c668f229bd6882a9When migrating from a legacy data warehouse to Amazon Redshift, it is tempting to adopt a lift-and-shift approach, but this can result in performance and scale issues long term. This post guides you through the following best practices for ensuring optimal, consistent runtimes for your ETL processes.<p>An <a href="https://en.wikipedia.org/wiki/Extract,_transform,_load" target="_blank" rel="noopener noreferrer">ETL (Extract, Transform, Load)</a> process enables you to load data from source systems into your data warehouse. This is typically executed as a batch or near-real-time ingest process to keep the data warehouse current and provide up-to-date analytical data to end users.</p>
<p>Amazon Redshift is a fast, petabyte-scale <a href="https://aws.amazon.com/data-warehouse/" target="_blank" rel="noopener noreferrer">data warehouse</a>&nbsp;that enables you to make data-driven decisions easily. With Amazon Redshift, you can get insights into your big data in a cost-effective fashion using standard SQL. You can set up any type of data model, from star and snowflake schemas to simple denormalized tables for running any analytical queries.</p>
<p>To operate a robust ETL platform and deliver data to Amazon Redshift in a timely manner, design your ETL processes to take account of Amazon Redshift’s architecture. When migrating from a legacy data warehouse to Amazon Redshift, it is tempting to adopt a lift-and-shift approach, but this can result in performance and scale issues long term. This post guides you through the following best practices for ensuring optimal, consistent runtimes for your ETL processes:</p>
<ul>
<li>COPY data from multiple, evenly sized files.</li>
<li>Use workload management to improve ETL runtimes.</li>
<li>Perform table maintenance regularly.</li>
<li>Perform multiple steps in a single transaction.</li>
<li>Load data in bulk.</li>
<li>Use UNLOAD to extract large result sets.</li>
<li>Use Amazon Redshift Spectrum for ad hoc ETL processing.</li>
<li>Monitor daily ETL health using diagnostic queries.</li>
</ul>
<h2>1. COPY data from multiple, evenly sized files</h2>
<p>Amazon Redshift is an MPP (massively parallel processing) database, where all the compute nodes divide and parallelize the work of ingesting data. Each node is further subdivided into slices, with each slice having one or more dedicated cores, equally dividing the processing capacity. The number of <a href="http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html" target="_blank" rel="noopener noreferrer">slices per node</a> depends on the node type of the cluster. For example, each DS2.XLARGE compute node has two slices, whereas each DS2.8XLARGE compute node has 16 slices.</p>
<p>When you load data into Amazon Redshift, you should aim to have each slice do an equal amount of work. When you load the data from a single large file or from files split into uneven sizes, some slices do more work than others. As a result, the process runs only as fast as the slowest, or most heavily loaded, slice. In the example shown below, a single large file is loaded into a two-node cluster, resulting in only one of the nodes, “Compute-0”, performing all the data ingestion:</p>
<p><img class="alignnone size-full wp-image-4282" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/01/25/ETL1.png" alt="" width="800" height="344" /></p>
<p>When splitting your data files, ensure that they are of approximately equal size –&nbsp;between 1 MB and 1 GB after compression. The number of files should be a multiple of the number of slices in your cluster. Also, I strongly recommend that you individually compress the load files using gzip, lzop, or bzip2 to efficiently load large datasets.</p>
<p>When loading multiple files into a single table, use a single COPY command for the table, rather than multiple COPY commands. Amazon Redshift automatically parallelizes the data ingestion. Using a single COPY command to bulk load data into a table ensures optimal use of cluster resources and the quickest possible throughput.</p>
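<p>As a sketch of these guidelines, the following COPY ingests every split, gzipped file under a common prefix with a single command; the table name and S3 prefix are placeholders, and the IAM role matches the examples later in this post:</p>
<div class="hide-language">
<pre><code class="lang-sql">COPY sales
FROM 's3://&lt;&lt;S3 Bucket&gt;&gt;/sales/2018/01/'
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
GZIP;</code></pre>
</div>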
<h2>2. Use workload management to improve ETL runtimes</h2>
<p>Use Amazon Redshift’s workload management (WLM) to define multiple queues dedicated to different workloads (for example, ETL versus reporting) and to manage the runtimes of queries. As you migrate more workloads into Amazon Redshift, your ETL runtimes can become inconsistent if WLM is not appropriately set up.</p>
<p>I recommend limiting the overall concurrency of WLM across all queues to around 15 or less. This <a href="http://docs.aws.amazon.com/redshift/latest/dg/tutorial-configuring-workload-management.html" target="_blank" rel="noopener noreferrer">WLM guide</a> helps you organize and monitor the different queues for your Amazon Redshift cluster.</p>
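<p>To check how your queues are currently configured, you can query one of the WLM system tables. The following read-only check is a sketch that assumes the default mapping, in which user-defined queues start at service class 6:</p>
<div class="hide-language">
<pre><code class="lang-sql">-- Inspect slot counts and memory per user-defined WLM queue
SELECT service_class, name, num_query_tasks AS slots, query_working_mem
FROM stv_wlm_service_class_config
WHERE service_class &gt; 5
ORDER BY service_class;</code></pre>
</div>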
<p>When managing different workloads on your Amazon Redshift cluster, consider the following for the queue setup:</p>
<ul>
<li>Create a queue dedicated to your ETL processes. Configure this queue with a small number of slots (5 or fewer). Amazon Redshift is designed for analytics queries, rather than transaction processing. The cost of COMMIT is relatively high, and excessive use of COMMIT can result in queries waiting for access to the commit queue. Because ETL is a commit-intensive process, having a separate queue with a small number of slots helps mitigate this issue.</li>
<li>Claim extra memory available in a queue. When executing an ETL query, you can take advantage of the <a href="http://docs.aws.amazon.com/redshift/latest/dg/r_wlm_query_slot_count.html" target="_blank" rel="noopener noreferrer">wlm_query_slot_count</a> to claim the extra memory available in a particular queue. For example, a typical ETL process might involve COPYing raw data into a staging table so that downstream ETL jobs can run transformations that calculate daily, weekly, and monthly aggregates. To speed up the COPY process (so that the downstream tasks can start in parallel sooner), the wlm_query_slot_count can be increased for this step.</li>
<li>Create a separate queue for reporting queries. Configure query monitoring rules on this queue to further manage long-running and expensive queries.</li>
<li>Take advantage of the <a href="http://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-dynamic-memory-allocation.html">dynamic memory parameters</a>. They let you shift memory from your ETL queue to your reporting queue after the ETL job has completed.</li>
</ul>
<h2>3. Perform table maintenance regularly</h2>
<p>Amazon Redshift is a columnar database, which enables fast transformations for aggregating data. Performing regular table maintenance ensures that transformation ETLs are predictable and performant. To get the best performance from your Amazon Redshift database, you must ensure that database tables are regularly VACUUMed and ANALYZEd. The <a href="https://github.com/awslabs/amazon-redshift-utils/tree/master/src/AnalyzeVacuumUtility" target="_blank" rel="noopener noreferrer">Analyze &amp; Vacuum schema utility</a> helps you automate the table maintenance task and have VACUUM &amp; ANALYZE executed on a regular schedule.</p>
<ul>
<li>Use VACUUM to sort tables and remove deleted blocks</li>
</ul>
<p>During a typical ETL refresh process, tables receive new incoming records using COPY, and unneeded data (cold data) is removed using DELETE. New rows are added to the unsorted region in a table. Deleted rows are simply marked for deletion.</p>
<p>DELETE does not automatically reclaim the space occupied by the deleted rows. Adding and removing large numbers of rows can therefore cause the unsorted region and the number of deleted blocks to grow. This can degrade the performance of queries executed against these tables.</p>
<p>After an ETL process completes, perform VACUUM to ensure that user queries execute in a consistent manner. The complete list of tables that need VACUUMing can be found using the Amazon Redshift Utils <em>table_info</em> script.</p>
<p>Use the following approaches to ensure that VACUUM is completed in a timely manner:</p>
<ul>
<li>Use <a href="http://docs.aws.amazon.com/redshift/latest/dg/r_wlm_query_slot_count.html" target="_blank" rel="noopener noreferrer">wlm_query_slot_count</a> to claim all the memory allocated in the ETL WLM queue during the VACUUM process.</li>
<li>DROP or TRUNCATE intermediate or staging tables, thereby eliminating the need to VACUUM them.</li>
<li>If your table has a compound sort key with only one sort column, try to <a href="http://docs.aws.amazon.com/redshift/latest/dg/vacuum-load-in-sort-key-order.html" target="_blank" rel="noopener noreferrer">load your data in sort key order</a>. This helps reduce or eliminate the need to VACUUM the table.</li>
<li>Consider using <a href="http://docs.aws.amazon.com/redshift/latest/dg/vacuum-time-series-tables.html" target="_blank" rel="noopener noreferrer">time series tables</a>. This helps reduce the amount of data you need to VACUUM.</li>
</ul>
<ul>
<li>Use ANALYZE to update database statistics</li>
</ul>
<p>Amazon Redshift uses a cost-based query planner and optimizer that relies on table statistics to make good decisions about the query plan for the SQL statements. Regular statistics collection after the ETL completion ensures that user queries run fast, and that daily ETL processes are performant. The Amazon Redshift utility <a href="https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminScripts/table_info.sql" target="_blank" rel="noopener noreferrer">table_info script</a> provides insights into the freshness of the statistics. Keeping the percentage of stale statistics (pct_stats_off) below 20% ensures effective query plans for the SQL queries.</p>
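<p>As a minimal example, a post-ETL maintenance step for the daily aggregate table used elsewhere in this post might look like the following (the slot count is illustrative):</p>
<div class="hide-language">
<pre><code class="lang-sql">-- Claim extra memory in the ETL queue so the sort completes faster
SET wlm_query_slot_count TO 5;

-- Re-sort rows and reclaim deleted blocks, then refresh planner statistics
VACUUM daily_table;
ANALYZE daily_table;</code></pre>
</div>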
<h2>4. Perform multiple steps in a single transaction</h2>
<p>ETL transformation logic often spans multiple steps. Because commits in Amazon Redshift are expensive, if each ETL step performs a commit, multiple concurrent ETL processes can take a long time to execute.</p>
<p>To minimize the number of commits in a process, the steps in an ETL script should be surrounded by a BEGIN…END statement so that a single commit is performed only after all the transformation logic has been executed. Here is an example of a multi-step ETL script that performs one commit at the end:</p>
<div class="hide-language">
<pre><code class="lang-sql">Begin
CREATE temporary staging_table;
INSERT INTO staging_table SELECT .. FROM source (transformation logic);
DELETE FROM daily_table WHERE dataset_date =?;
INSERT INTO daily_table SELECT .. FROM staging_table (daily aggregate);
DELETE FROM weekly_table WHERE weekending_date=?;
INSERT INTO weekly_table SELECT .. FROM staging_table(weekly aggregate);
Commit</code></pre>
</div>
<h2>5. Load data in bulk</h2>
<p>Amazon Redshift is designed to store and query petabyte-scale datasets. Using Amazon S3 you can stage and accumulate data from multiple source systems before executing a bulk COPY operation. The following methods allow efficient and fast transfer of these bulk datasets into Amazon Redshift:</p>
<ul>
<li>Use a <a href="http://docs.aws.amazon.com/redshift/latest/dg/loading-data-files-using-manifest.html" target="_blank" rel="noopener noreferrer">manifest file</a> to ingest large datasets that span multiple files. The manifest file is a JSON file that lists all the files to be loaded into Amazon Redshift. Using a manifest file ensures that <a href="http://docs.aws.amazon.com/redshift/latest/dg/managing-data-consistency.html" target="_blank" rel="noopener noreferrer">Amazon Redshift has a consistent view of the data to be loaded from S3</a>, while also ensuring that duplicate files do not result in the same data being loaded more than one time.</li>
<li>Use <a href="http://docs.aws.amazon.com/redshift/latest/dg/merge-create-staging-table.html" target="_blank" rel="noopener noreferrer">temporary staging</a> tables to hold the data for transformation. These tables are automatically dropped after the ETL session is complete. Temporary tables can be created using the CREATE TEMPORARY TABLE syntax, or by issuing a SELECT … INTO #TEMP_TABLE query. Explicitly specifying the CREATE TEMPORARY TABLE statement allows you to control the DISTRIBUTION KEY, SORT KEY, and compression settings to further improve performance.</li>
<li>Use <a href="http://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_TABLE_APPEND.html" target="_blank" rel="noopener noreferrer">ALTER TABLE APPEND</a> to swap data from the staging tables to the target table. Data in the source table is moved to matching columns in the target table. Column order doesn’t matter. After data is successfully appended to the target table, the source table is empty. ALTER TABLE APPEND is much faster than a similar CREATE TABLE AS or INSERT INTO operation because it doesn’t involve copying or moving data. A sketch of this staging pattern follows the list.</li>
</ul>
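<p>The following sketch combines the staging and swap steps from the list above; the table names are illustrative, and the staging table is created as a permanent table so that it can be used with ALTER TABLE APPEND:</p>
<div class="hide-language">
<pre><code class="lang-sql">-- Create a staging table with the same structure as the target
CREATE TABLE stage_sales (LIKE sales);

-- Bulk load the staged files
COPY stage_sales
FROM 's3://&lt;&lt;S3 Bucket&gt;&gt;/sales/2018/01/'
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
GZIP;

-- Move the rows into the target without copying data, then clean up
ALTER TABLE sales APPEND FROM stage_sales;
DROP TABLE stage_sales;</code></pre>
</div>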
<h2>6. Use UNLOAD to extract large result sets</h2>
<p>Fetching a large number of rows using SELECT is expensive and takes a long time. When a large amount of data is fetched from the Amazon Redshift cluster, the leader node has to hold the data temporarily until the fetches are complete. Further, data is streamed out sequentially, which results in longer elapsed time. As a result, the leader node can become hot, which not only affects the SELECT that is being executed, but also throttles resources for creating execution plans and managing the overall cluster resources. Here is an example of a large SELECT statement. Notice that the leader node is doing most of the work to stream out the rows:</p>
<p><img class="alignnone size-full wp-image-4283" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/01/25/ETL2.png" alt="" width="800" height="459" /></p>
<p>Use UNLOAD to extract large results sets directly to S3. After it’s in S3, the data can be shared with multiple downstream systems. By default, UNLOAD writes data in parallel to multiple files according to the number of slices in the cluster. All the compute nodes participate to quickly offload the data into S3.</p>
<p>If you are extracting data for use with <a href="https://aws.amazon.com/redshift/spectrum/" target="_blank" rel="noopener noreferrer">Amazon Redshift Spectrum</a>, you should make use of the MAXFILESIZE parameter to keep files around 150 MB. Similar to item 1 above, having many evenly sized files ensures that Redshift Spectrum can do the maximum amount of work in parallel.</p>
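<p>For example (the query, path, and role are placeholders):</p>
<div class="hide-language">
<pre><code class="lang-sql">UNLOAD ('SELECT * FROM weekly_tbl')
TO 's3://&lt;&lt;S3 Bucket&gt;&gt;/spectrum/weekly/'
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
MAXFILESIZE 150 MB;</code></pre>
</div>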
<h2>7. Use Redshift Spectrum for ad hoc ETL processing</h2>
<p>Events such as data backfill, promotional activity, and special calendar days can trigger additional data volumes that affect the data refresh times in your Amazon Redshift cluster. To help address these spikes in data volumes and throughput, I recommend staging data in S3. After data is organized in S3, Redshift Spectrum enables you to query it directly using standard SQL. In this way, you gain the benefits of additional capacity without having to resize your cluster.</p>
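<p>As a sketch, after the staged data is cataloged (the schema, database, and table names below are hypothetical), registering it as an external schema lets you query it in place without loading it into the cluster:</p>
<div class="hide-language">
<pre><code class="lang-sql">CREATE EXTERNAL SCHEMA spectrum_stage
FROM DATA CATALOG DATABASE 'etl_stage_db'
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole';

-- Query the staged files in place, joining to local tables as needed
SELECT COUNT(*) FROM spectrum_stage.sales_backfill;</code></pre>
</div>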
<p>For tips on getting started with and optimizing the use of Redshift Spectrum, see the previous post, <a href="https://aws.amazon.com/blogs/big-data/10-best-practices-for-amazon-redshift-spectrum/" target="_blank" rel="noopener noreferrer">10 Best Practices for Amazon Redshift Spectrum</a>.</p>
<h2>8. Monitor daily ETL health using diagnostic queries</h2>
<p>Monitoring the health of your ETL processes on a regular basis helps identify the early onset of performance issues before they have a significant impact on your cluster. The following monitoring scripts can be used to provide insights into the health of your ETL processes:</p>
<table border="1" cellpadding="10">
<tbody>
<tr style="background-color: #050505">
<td style="text-align: center" width="239"><span style="color: #ffffff"><strong>Script</strong></span></td>
<td style="text-align: center" width="228"><span style="color: #ffffff"><strong>Use when…</strong></span></td>
<td style="text-align: center" width="240"><span style="color: #ffffff"><strong>Solution</strong></span></td>
</tr>
<tr>
<td width="239"><a href="https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminScripts/commit_stats.sql" target="_blank" rel="noopener noreferrer">commit_stats.sql – Commit queue statistics from past days, showing largest queue length and queue time first</a></td>
<td width="228">DML statements such as INSERT/UPDATE/COPY/DELETE operations take several times longer to execute when multiple of these operations are in progress</td>
<td width="240">Set up separate WLM queues for the ETL process and limit the concurrency to &lt; 5.</td>
</tr>
<tr>
<td width="239"><a href="https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminScripts/copy_performance.sql" target="_blank" rel="noopener noreferrer">copy_performance.sql –&nbsp; Copy command statistics for the past days </a></td>
<td width="228">Daily COPY operations take longer to execute</td>
<td width="240">• Follow the <a href="http://docs.aws.amazon.com/redshift/latest/dg/t_Loading-data-from-S3.html" target="_blank" rel="noopener noreferrer">best practices for the COPY command</a>.<br /> • Analyze data growth with the incoming datasets and consider cluster resize to meet the expected SLA.</td>
</tr>
<tr>
<td width="239"><a href="https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminScripts/table_info.sql" target="_blank" rel="noopener noreferrer">table_info.sql – Table skew and unsorted statistics along with storage and key information</a></td>
<td width="228">Transformation steps take longer to execute</td>
<td width="240">• Set up regular VACCUM jobs to address unsorted rows and claim the deleted blocks so that transformation SQL execute optimally.<br /> • Consider a <a href="http://docs.aws.amazon.com/redshift/latest/dg/c_designing-tables-best-practices.html" target="_blank" rel="noopener noreferrer">table redesign</a> to avoid data skewness.</td>
</tr>
<tr>
<td width="239"><a href="https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminViews/v_check_transaction_locks.sql" target="_blank" rel="noopener noreferrer">v_check_transaction_locks.sql – Monitor transaction locks</a></td>
<td width="228">INSERT/UPDATE/COPY/DELETE operations on particular tables do not respond back in timely manner, compared to when run after the ETL</td>
<td width="240">Multiple DML statements are operating on the same target table at the same moment from different transactions. Set up ETL job dependency so that they execute serially for the same target table.</td>
</tr>
<tr>
<td width="239"><a href="https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminViews/v_get_schema_priv_by_user.sql" target="_blank" rel="noopener noreferrer">v_get_schema_priv_by_user.sql – Get the schema that the user has access</a> to</td>
<td width="228">Reporting users can view intermediate tables</td>
<td width="240">Set up separate database groups for reporting and ETL users, and grants access to objects using <a href="http://docs.aws.amazon.com/redshift/latest/dg/r_GRANT.html" target="_blank" rel="noopener noreferrer">GRANT</a>.</td>
</tr>
<tr>
<td width="239"><a href="https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminViews/v_generate_tbl_ddl.sql">v_generate_tbl_ddl.sql – Get the table DDL</a></td>
<td width="228">You need to create an empty table with same structure as target table for data backfill</td>
<td width="240">Generate DDL using this script for data backfill.</td>
</tr>
<tr>
<td width="239"><a href="https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminViews/v_space_used_per_tbl.sql" target="_blank" rel="noopener noreferrer">v_space_used_per_tbl.sql – monitor space used by individual tables</a></td>
<td width="228">Amazon Redshift data warehouse space growth is trending upwards more than normal</td>
<td width="240"> <p>Analyze the individual tables that are growing at higher rate than normal. Consider data archival using UNLOAD to S3 and Redshift Spectrum for later analysis.</p> <p>Use <a href="https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminScripts/unscanned_table_summary.sql" target="_blank" rel="noopener noreferrer">unscanned_table_summary.sql</a> to find unused table and archive or drop them.</p></td>
</tr>
<tr>
<td width="239"><a href="https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminScripts/top_queries.sql" target="_blank" rel="noopener noreferrer">top_queries.sql – Return the top 50 time consuming statements aggregated by its text</a></td>
<td width="228">ETL transformations are taking longer to execute</td>
<td width="240">Analyze the top transformation SQL and use <a href="http://docs.aws.amazon.com/redshift/latest/dg/t_explain_plan_example.html">EXPLAIN</a> to find opportunities for tuning the query plan.</td>
</tr>
</tbody>
</table>
<p>There are several other useful scripts available in the <a href="https://github.com/awslabs/amazon-redshift-utils/tree/master/src">amazon-redshift-utils</a> repository. The <a href="https://github.com/awslabs/amazon-redshift-utils/tree/master/src/LambdaRunner" target="_blank" rel="noopener noreferrer">AWS Lambda Utility Runner</a> runs a subset of these scripts on a scheduled basis, allowing you to automate much of the monitoring of your ETL processes.</p>
<h2>Example ETL process</h2>
<p>The following ETL process reinforces some of the best practices discussed in this post. Consider the following four-step daily ETL workflow where data from an RDBMS source system is staged in S3 and then loaded into Amazon Redshift. Amazon Redshift is used to calculate daily, weekly, and monthly aggregations, which are then unloaded to S3, where they can be further processed and made available for end-user reporting using a number of different tools, including Redshift Spectrum and Amazon Athena.</p>
<p><img class="alignnone size-full wp-image-4284" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2018/01/25/ETL3.png" alt="" width="770" height="198" /></p>
<h3>Step 1: &nbsp;Extract from the RDBMS source to a S3 bucket</h3>
<p>In this ETL process, the data extract job fetches change data every hour and stages it into multiple hourly files. For example, the staged S3 folder looks like the following:</p>
<div class="hide-language">
<pre><code class="lang-code"> [ec2-user@ip-172-81-1-52 ~]$ aws s3 ls s3://&lt;&lt;S3 Bucket&gt;&gt;/batch/2017/07/02/
2017-07-02 01:59:58 81900220 20170702T01.export.gz
2017-07-02 02:59:56 84926844 20170702T02.export.gz
2017-07-02 03:59:54 78990356 20170702T03.export.gz
…
2017-07-02 22:00:03 75966745 20170702T21.export.gz
2017-07-02 23:00:02 89199874 20170702T22.export.gz
2017-07-02 00:59:59 71161715 20170702T23.export.gz</code></pre>
</div>
<p>Organizing the data into multiple, evenly sized files enables the COPY command to ingest this data using all available resources in the Amazon Redshift cluster. Further, the files are compressed (gzipped) to further reduce COPY times.</p>
<h3>Step 2: Stage data to the Amazon Redshift table for cleansing</h3>
<p>Ingesting the data can be accomplished using a JSON-based <a href="http://docs.aws.amazon.com/redshift/latest/dg/loading-data-files-using-manifest.html" target="_blank" rel="noopener noreferrer">manifest file</a>. Using the manifest file ensures that <a href="http://docs.aws.amazon.com/redshift/latest/dg/managing-data-consistency.html" target="_blank" rel="noopener noreferrer">S3 eventual consistency</a> issues can be eliminated and also provides an opportunity to dedupe any files if needed. A sample manifest20170702.json file looks like the following:</p>
<div class="hide-language">
<pre><code class="lang-json">{
&quot;entries&quot;: [
{&quot;url&quot;:&quot; s3://&lt;&lt;S3 Bucket&gt;&gt;/batch/2017/07/02/20170702T01.export.gz&quot;, &quot;mandatory&quot;:true},
{&quot;url&quot;:&quot; s3://&lt;&lt;S3 Bucket&gt;&gt;/batch/2017/07/02/20170702T02.export.gz&quot;, &quot;mandatory&quot;:true},
…
{&quot;url&quot;:&quot; s3://&lt;&lt;S3 Bucket&gt;&gt;/batch/2017/07/02/20170702T23.export.gz&quot;, &quot;mandatory&quot;:true}
]
}</code></pre>
</div>
<p>The data can be ingested using the following command:</p>
<div class="hide-language">
<pre><code class="lang-code">SET wlm_query_slot_count TO &lt;&lt;max available concurrency in the ETL queue&gt;&gt;;
COPY stage_tbl FROM 's3:// &lt;&lt;S3 Bucket&gt;&gt;/batch/manifest20170702.json' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' manifest;
</code></pre>
</div>
<p>Because the downstream ETL processes depend on this COPY command to complete, the <a href="http://docs.aws.amazon.com/redshift/latest/dg/r_wlm_query_slot_count.html" target="_blank" rel="noopener noreferrer">wlm_query_slot_count</a> is used to claim all the memory available to the queue. This helps the COPY command complete as quickly as possible.</p>
<h3>Step 3: Transform data to create daily, weekly, and monthly datasets and load into target tables</h3>
<p>Data is staged in the “stage_tbl” from where it can be transformed into the daily, weekly, and monthly aggregates and loaded into target tables. The following job illustrates a typical weekly process:</p>
<div class="hide-language">
<pre><code class="lang-sql">Begin
INSERT into ETL_LOG (..) values (..);
DELETE from weekly_tbl where dataset_week = &lt;&lt;current week&gt;&gt;;
INSERT into weekly_tbl (..)
SELECT date_trunc('week', dataset_day) AS week_begin_dataset_date, SUM(C1) AS C1, SUM(C2) AS C2
FROM stage_tbl
GROUP BY date_trunc('week', dataset_day);
INSERT into AUDIT_LOG values (..);
COMMIT;
End;
</code></pre>
</div>
<p>As shown above, multiple steps are combined into one transaction to perform a single commit, reducing contention on the commit queue.</p>
<h3>Step 4: Unload the daily dataset to populate the S3 data lake bucket</h3>
<p>The transformed results are now unloaded into another S3 bucket, where they can be further processed and made available for end-user reporting using a number of different tools, including Redshift Spectrum and Amazon Athena.</p>
<div class="hide-language">
<pre><code class="lang-code">unload ('SELECT * FROM weekly_tbl WHERE dataset_week = &lt;&lt;current week&gt;&gt;’) TO 's3:// &lt;&lt;S3 Bucket&gt;&gt;/datalake/weekly/20170526/' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';</code></pre>
</div>
<h2>Summary</h2>
<p>Amazon Redshift lets you easily operate petabyte-scale data warehouses on the cloud. This post summarized the best practices for operating scalable ETL natively within Amazon Redshift. I demonstrated efficient ways to ingest and transform data, along with close monitoring, and walked through a sample ETL workload that applies these best practices to transform data in Amazon Redshift.</p>
<p>If you have questions or suggestions, please comment below.</p>
<hr />
<h3>Additional Reading</h3>
<p>If you found this post useful, be sure to check out <a href="https://aws.amazon.com/blogs/big-data/top-10-performance-tuning-techniques-for-amazon-redshift/" target="_blank" rel="noopener noreferrer">Top 10 Performance Tuning Techniques for Amazon Redshift</a> and <a href="https://aws.amazon.com/blogs/big-data/10-best-practices-for-amazon-redshift-spectrum/" target="_blank" rel="noopener noreferrer">10 Best Practices for Amazon Redshift Spectrum</a>.</p>
<p><img class="alignnone wp-image-2460 size-thumbnail" style="margin: 20px 0px 20px 0px;border: 1px solid #cccccc" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2017/06/16/spectrum_top_10_2-150x150.gif" alt="" width="150" height="150" /></p>
<hr />
<h3>About the Author</h3>
<p><img class="size-full wp-image-3311 alignleft" src="https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2017/10/19/thiyagu.jpg" alt="" width="100" height="113" /><a href="https://aws.amazon.com/blogs/big-data/author/thiyagu/" target="_blank" rel="noopener noreferrer">Thiyagarajan Arumugam</a> is a Big Data Solutions Architect at Amazon Web Services and designs customer architectures to process data at scale. Prior to AWS, he built data warehouse solutions at Amazon.com. In his free time, he enjoys all outdoor sports and practices the Indian classical drum mridangam.</p>