Querying Data Pipeline with AWS Athena

The Data Pipeline team at Demonware (Activision) routes large amounts of data from various sources to many destinations every day.

Our team always wanted to be able to query processed data for debugging and analytical purposes, but building large data warehouses was never our priority, since that usually happens downstream.

AWS Athena is a completely serverless query service that doesn't require any infrastructure setup or complex provisioning. We just needed to save some of our data streams to AWS S3 and define a schema. With just those few simple steps, we were able to write complex SQL queries against gigabytes of data and get results in seconds.

In this presentation I want to show multiple ways to stream your data to AWS S3, explain some underlying tech, show how to define a schema and finally share some of the best practices we applied.

5.
The Problem
Now
Stream Processing
● Not an option - want to query historical data
● Retention is usually short
Data Warehouse
● Need raw data
● Don’t want to support complex infrastructure

6.
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
AWS Athena
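
A minimal sketch of that workflow, assuming JSON events already landing in S3 (the table name, columns, and bucket are illustrative, not from the talk):

CREATE EXTERNAL TABLE events (
  event_time timestamp,
  user_id    string,
  title      string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://example-bucket/events/';

-- Standard SQL, no infrastructure to manage, billed per query.
SELECT title, count(*) AS cnt
FROM events
WHERE event_time > timestamp '2017-01-01 00:00:00'
GROUP BY title
ORDER BY cnt DESC
LIMIT 10;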

25.
• Find a good partitioning field, like a date, version, user, etc.
• Update Athena with the partitioning schema (use PARTITIONED BY in the DDL) and metadata
• You can create partitions manually or let Athena handle them (but that requires a certain directory structure)
• But there is no magic! You have to use partitioning fields in queries (like regular fields), otherwise no partitioning is applied (see the sketch after this slide)
Partitioning
Best practices
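
A minimal sketch of these steps, assuming a table partitioned by date (all names and S3 paths are illustrative):

-- Declare the partition field in the DDL; 'dt' lives in the S3 path, not in the files.
CREATE EXTERNAL TABLE events (
  user_id string,
  payload string
)
PARTITIONED BY (dt string)
STORED AS PARQUET
LOCATION 's3://example-bucket/events/';

-- Create a partition manually...
ALTER TABLE events ADD PARTITION (dt = '2017-06-01')
LOCATION 's3://example-bucket/events/dt=2017-06-01/';

-- ...or let Athena discover Hive-style key=value directories.
MSCK REPAIR TABLE events;

-- No magic: the partition field must appear in the query,
-- otherwise every partition is scanned.
SELECT count(*) FROM events WHERE dt = '2017-06-01';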

29.
• Use binary formats like Parquet! (see the sketch after this slide)
• Don’t forget about compression
• Only include the columns that you need
• LIMIT is amazing!
• For more SQL optimizations, look at the Presto best practices
• Avoid a lot of small files
Performance tips
Best practices
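
One way to apply several of these tips at once, assuming an Athena version with CTAS support (table names and the S3 location are hypothetical):

-- Rewrite raw data as Snappy-compressed Parquet, keeping only the needed columns.
CREATE TABLE events_parquet
WITH (
  format = 'PARQUET',
  parquet_compression = 'SNAPPY',
  external_location = 's3://example-bucket/events-parquet/'
) AS
SELECT user_id, title, event_time
FROM events;

-- Scans far less data: columnar format, compression, pruned columns, LIMIT.
SELECT title, count(*) AS cnt
FROM events_parquet
GROUP BY title
ORDER BY cnt DESC
LIMIT 10;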

30.
The dilemma: volume of data vs. number and size of files (buffering) vs. time to index
Given a certain data volume, you want as few files as possible, with each file as large as possible, appearing in S3 as soon as possible. That’s really hard; you have to give up something.

31.
Possible solutions?
• Don’t give up anything! Have two separate pipelines: one with long retention (bigger files) and another with short retention (smaller files, fast time to index). Cons? Double the storage.
• Give up on the number and size of files. But! Periodically merge small files in the background (see the sketch after this list). Cons? Lots of moving parts and slower queries against fresh data.
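
A hedged sketch of that background merge, using a CTAS query to compact one day's small files into fewer large ones (all names, paths, and the 'dt' partition are hypothetical):

-- Compact one day of small files into fewer, larger Parquet files.
CREATE TABLE events_2017_06_01_compact
WITH (
  format = 'PARQUET',
  parquet_compression = 'SNAPPY',
  external_location = 's3://example-bucket/events-compacted/dt=2017-06-01/'
) AS
SELECT * FROM events WHERE dt = '2017-06-01';

-- Point the partition at the compacted files; a scheduled job can then
-- delete the small originals and drop the temporary table.
ALTER TABLE events PARTITION (dt = '2017-06-01')
SET LOCATION 's3://example-bucket/events-compacted/dt=2017-06-01/';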