CloudWatch2S3: an Easy Way to Get Your Logs to AWS S3

Pete Cheslock, May 14, 2019

This is a guest post by engineer Amir Szekely, who’s written an awesome tool — CloudWatch2S3 — which can help you solve the long-term retention issues of Amazon CloudWatch.

AWS CloudWatch Logs is a handy service for getting your logs centralized quickly, but it has its limitations. Retaining logs for an extended period of time can get expensive. You cannot easily search logs across multiple streams. Logs are hard to export, and integration requires AWS-specific code. Sometimes it makes more sense to store logs as text files in S3, but that's not always an option: some AWS services, like Lambda, write their logs directly to CloudWatch Logs.

Amazon has published several blog posts about this problem and its solution. In short, they create a Kinesis stream that writes to S3. CloudWatch Logs subscriptions that export logs to the new stream are created either manually with a script or in response to CloudTrail events about new log streams. This architecture is stable and scalable, but the implementation has a few drawbacks:

- It writes compressed CloudWatch JSON files to S3 rather than raw log lines.

- Setup is still somewhat manual: you have to create a bucket, edit permissions, modify and upload source code, and run a script to initialize everything.

That is why I created CloudWatch2S3 — a single CloudFormation template that sets everything up in one go while still leaving room for tweaking with parameters.

The architecture is mostly the same as Amazon’s but adds a subscription timer to remove the hard requirement on CloudTrail, and post-processing to optionally write raw log files to S3 instead of compressed CloudWatch JSON files.
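That post-processing step has to unpack the format CloudWatch Logs pushes through a subscription: each Kinesis record carries gzip-compressed JSON with a logEvents array. A minimal sketch of extracting the raw log lines from one record (the function name and sample payload are illustrative, not part of the template):

```python
import gzip
import json

def extract_raw_lines(record_data: bytes) -> list[str]:
    """Decompress one Kinesis record from a CloudWatch Logs
    subscription and return the raw log messages it contains."""
    payload = json.loads(gzip.decompress(record_data))
    # Subscriptions also emit CONTROL_MESSAGE records used for health
    # checks; only DATA_MESSAGE records contain actual log events.
    if payload.get("messageType") != "DATA_MESSAGE":
        return []
    return [event["message"] for event in payload["logEvents"]]

# Illustrative payload in the shape CloudWatch Logs produces.
sample = {
    "messageType": "DATA_MESSAGE",
    "logGroup": "/aws/lambda/my-function",
    "logStream": "2019/05/14/[$LATEST]abcdef",
    "logEvents": [
        {"id": "1", "timestamp": 1557800000000, "message": "START RequestId: 42"},
        {"id": "2", "timestamp": 1557800000100, "message": "END RequestId: 42"},
    ],
}
lines = extract_raw_lines(gzip.compress(json.dumps(sample).encode()))
```

Writing only the `message` fields back out is what turns the compressed CloudWatch JSON into plain text files in S3.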

Setup is simple: there is just one CloudFormation template, and the default parameters should work for most use cases.

1. Deploy the CloudWatch2S3 CloudFormation template, keeping the default parameters unless you need to tweak them.
2. Go to the "Outputs" tab and note the bucket where logs will be written.

That's it!
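If you prefer to script the deployment instead of clicking through the console, a boto3 sketch might look like the following. The template URL and stack name are placeholders, and the IAM capability is an assumption based on the template creating roles for its Lambda and Kinesis resources; only the AllowedAccounts parameter name comes from the template itself.

```python
def create_cloudwatch2s3_stack(cloudformation, template_url,
                               stack_name="cloudwatch2s3",
                               allowed_accounts=None):
    """Launch the CloudWatch2S3 stack via the CloudFormation API.

    `cloudformation` is a boto3 CloudFormation client. AllowedAccounts
    is only passed when other accounts will export logs to this one.
    """
    kwargs = {
        "StackName": stack_name,
        "TemplateURL": template_url,  # placeholder URL, point at the real template
        # Assumption: the template creates IAM roles, so stack creation
        # must acknowledge IAM capabilities.
        "Capabilities": ["CAPABILITY_IAM"],
    }
    if allowed_accounts:
        kwargs["Parameters"] = [{
            "ParameterKey": "AllowedAccounts",
            "ParameterValue": ",".join(allowed_accounts),
        }]
    return cloudformation.create_stack(**kwargs)

if __name__ == "__main__":
    import boto3
    create_cloudwatch2s3_stack(
        boto3.client("cloudformation"),
        template_url="https://example.com/CloudWatch2S3.template",  # placeholder
    )
```

Passing the client in as an argument keeps the function easy to exercise without touching a real AWS account.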

Another feature is the ability to export logs from multiple accounts to the same bucket. To set this up, you need to set the AllowedAccounts parameter to a comma-separated list of AWS account identifiers allowed to export logs. Once you create the stack, go to the “Outputs” tab and copy the value of LogDestination. Then deploy the CloudWatch2S3-additional-account.template to the other accounts while setting LogDestination to the value previously copied.

Additionally, if you are trying to save money by exporting your logs to Amazon S3, make sure you change your retention settings in CloudWatch to purge your old logs. Otherwise, you may find that both your Amazon S3 and CloudWatch bills will continue to increase.
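CloudWatch Logs only accepts a fixed set of retention periods, so a small helper that rounds a desired retention up to the nearest allowed value makes this less error-prone. A sketch assuming boto3 (the helper names are mine, and the list reflects the values PutRetentionPolicy accepted at the time of writing):

```python
# Retention periods (in days) accepted by CloudWatch Logs' PutRetentionPolicy.
VALID_RETENTION_DAYS = [1, 3, 5, 7, 14, 30, 60, 90, 120, 150,
                        180, 365, 400, 545, 731, 1827, 3653]

def nearest_retention(days: int) -> int:
    """Return the smallest allowed retention period >= the requested days."""
    for valid in VALID_RETENTION_DAYS:
        if valid >= days:
            return valid
    return VALID_RETENTION_DAYS[-1]

def trim_retention(logs_client, log_group: str, days: int) -> None:
    """Set a short retention on a log group once its logs are archived in S3.

    `logs_client` is a boto3 CloudWatch Logs client; the log group name
    is whatever group you are already exporting.
    """
    logs_client.put_retention_policy(
        logGroupName=log_group,
        retentionInDays=nearest_retention(days),
    )
```

For example, asking for 45 days of retention would round up to the allowed 60-day setting.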

Now that your logs are in Amazon S3 for long-term retention, set up your bucket for integration with CHAOSSEARCH to be able to hunt, search, and visualize your log and event data across months and years.