AWS Lambda functions currently have a five-minute execution limit, and while this is not a big problem for many functions, it becomes problematic when you’re executing a task that has some inherent latency. I created a function that stops all instances, creates snapshots of all attached EBS volumes, and starts those instances back up. This was easily feasible in my personal environment, but in larger environments, the amount of time it takes to stop all instances – and back up all those volumes without hitting a CreateSnapshot limit – can easily exceed five minutes.

The solution is two-fold.

First, make sure you insert an increasing or variable sleep timer between snapshot creations. I had to do this to avoid hitting the CreateSnapshot rate limit.
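As a rough sketch of that first fix, here’s what an increasing sleep timer between CreateSnapshot calls might look like with boto3. The function names and the backoff values are my own illustration, not anything AWS-specific:

```python
import time

def backoff_delays(count, base=1.0, step=2.0):
    """Return an increasing sleep schedule: base, base+step, base+2*step, ..."""
    return [base + i * step for i in range(count)]

def snapshot_volumes(volume_ids, ec2=None):
    """Snapshot each volume, sleeping a little longer after each
    CreateSnapshot call to stay under the API rate limit."""
    if ec2 is None:
        import boto3  # imported lazily; only needed when talking to AWS
        ec2 = boto3.client("ec2")
    for volume_id, delay in zip(volume_ids, backoff_delays(len(volume_ids))):
        ec2.create_snapshot(VolumeId=volume_id,
                            Description="Automated backup of " + volume_id)
        time.sleep(delay)  # increasing pause between snapshots
```

Tune `base` and `step` to your environment; the point is just that the gaps grow as the run goes on.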

Second, in order to shut down all your instances properly, create snapshots of their volumes, and start the instances back up, I had to use three separate functions and chain them together through the magic of CloudWatch and SNS.

Here’s how it works:

Each function outputs logs in CloudWatch. In those logs, you’ll usually see a line like “END RequestId” when the function has completed. You can create a metric filter in that log group that looks for “END RequestId.” Once that filter is created, you can create an alarm on it. The alarm will trigger when the metric filter has been matched and, if configured to do so, it can send a notification to an SNS topic of your choice.
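Here’s a sketch of that wiring in boto3. The metric and namespace names are placeholders of my own choosing; the request shapes match the `put_metric_filter` and `put_metric_alarm` APIs:

```python
def filter_and_alarm_params(log_group, metric_name, topic_arn,
                            namespace="LambdaChain"):
    """Build the request parameters for a metric filter that matches
    "END RequestId" and an alarm that notifies an SNS topic."""
    metric_filter = {
        "logGroupName": log_group,
        "filterName": metric_name + "-filter",
        "filterPattern": '"END RequestId"',
        "metricTransformations": [{
            "metricName": metric_name,
            "metricNamespace": namespace,
            "metricValue": "1",
        }],
    }
    alarm = {
        "AlarmName": metric_name + "-alarm",
        "MetricName": metric_name,
        "Namespace": namespace,
        "Statistic": "Sum",
        "Period": 60,
        "EvaluationPeriods": 1,
        "Threshold": 1,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [topic_arn],  # the SNS topic that kicks off the next function
    }
    return metric_filter, alarm

def wire_up(log_group, metric_name, topic_arn):
    import boto3  # lazy import; only needed when actually calling AWS
    metric_filter, alarm = filter_and_alarm_params(log_group, metric_name, topic_arn)
    boto3.client("logs").put_metric_filter(**metric_filter)
    boto3.client("cloudwatch").put_metric_alarm(**alarm)
```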

The SNS topic can be tied to a Lambda function and acts as the trigger that starts the next function. Tie the CloudWatch alarm for the function that shuts down instances to the SNS topic that triggers your backup function. Then go through the same process of creating a CloudWatch metric filter and alarm for the backup function, and have that alarm notify a second SNS topic.

The second SNS topic should be tied to a Lambda function that will start your instances back up again.
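That last function can be quite small: find everything that’s stopped and start it. A sketch, again with a hypothetical handler name and a lazy boto3 import:

```python
def start_all_handler(event, context=None, ec2=None):
    """Lambda entry point for the second SNS topic: start every
    stopped instance back up."""
    if ec2 is None:
        import boto3
        ec2 = boto3.client("ec2")
    stopped = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["stopped"]}])
    instance_ids = [inst["InstanceId"]
                    for res in stopped["Reservations"]
                    for inst in res["Instances"]]
    if instance_ids:
        ec2.start_instances(InstanceIds=instance_ids)
    return instance_ids
```

In a real environment you’d likely filter on a tag as well, so you only restart the instances this pipeline stopped.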

I recently tried to use the AWS CLI to upload a folder full of files to S3 using a custom KMS key. This is possible with the “aws s3api put-object” command, but not with the “aws s3 sync” command. If you’re just uploading a few files, this isn’t a big deal, but the frustration grows with each extra file that needs to be uploaded.

The “s3 sync” command is essentially a wrapper around the s3api PUT action, so in order to get the same effect for an entire folder with a custom KMS key, you would need to write some kind of wrapper of your own.
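Such a wrapper might look like this in boto3 – walk the folder and issue one `put_object` per file with the KMS key attached. The function name and arguments are my own; the `ServerSideEncryption` and `SSEKMSKeyId` parameters are the real ones `put_object` accepts:

```python
import os

def sync_with_kms(root, bucket, kms_key_id, prefix="", s3=None):
    """Upload every file under `root` to the bucket, encrypting each
    object with the given customer-managed KMS key."""
    if s3 is None:
        import boto3
        s3 = boto3.client("s3")
    uploaded = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # Build an S3 key from the path relative to the folder root
            key = prefix + os.path.relpath(path, root).replace(os.sep, "/")
            with open(path, "rb") as body:
                s3.put_object(Bucket=bucket, Key=key, Body=body,
                              ServerSideEncryption="aws:kms",
                              SSEKMSKeyId=kms_key_id)
            uploaded.append(key)
    return uploaded
```

Unlike a true sync, this re-uploads everything on each run; comparing timestamps or ETags before uploading is left as an exercise.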

Otherwise, you can use one of the stock encryption keys and upload your entire folder to S3.

The downside, although a one-time one, was working through the many settings needed to create the AMI and to tighten security a bit the way I needed. Also, the article was written with a Mac client in mind, and I run Windows.

So, with my Windows experience and with all the AWS work I’ve been doing lately, I put together a CloudFormation template to automate many of the steps. If you’re looking (more…)