Have you ever set up a temporary application environment and wished you could schedule automatic deletion of the environment rather than remembering to clean it up after you are done? If the answer is yes, then this blog post is for you.

Here is an example of setting up an AWS CloudFormation stack with a configurable TTL (time-to-live). When the TTL is up, deletion of the stack is triggered automatically. You can use this idea regardless of whether you have a single Amazon EC2 instance in the stack or a complex application environment. You can even use this idea in combination with other deployment and management services such as AWS Elastic Beanstalk or AWS OpsWorks, as long as your environment is modeled inside an AWS CloudFormation stack.

In this example, first I set up a sample application on an EC2 instance and then configure a ‘TTL’.

Configuring TTL is simple. Just schedule execution of a one-line shell script, deletestack.sh, using the ‘at’ command. The shell script uses the AWS Command Line Interface to call aws cloudformation delete-stack:
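A minimal sketch of the script and the scheduling command, assuming a stack named myTestStack in us-east-1 and a TTL of 120 minutes (all three values are placeholders):

#!/bin/bash
# deletestack.sh: delete the enclosing CloudFormation stack
aws cloudformation delete-stack --stack-name myTestStack --region us-east-1

# Schedule the script to run when the TTL expires
at -f ./deletestack.sh now + 120 minutes

Because the at job is scheduled as part of instance bootstrap, the countdown starts as soon as the environment comes up.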

Notice that the EC2 instance requires permissions to delete all of the stack resources. The permissions are granted to the EC2 instance via an IAM role. Also, notice that for the stack deletion to succeed, the IAM role needs to be the last in the order of deletion. You can ensure that the role is deleted last by making the other resources depend on it. Finally, as a best practice, you should grant the least possible privilege to the role. You can do this by using a finer-grained policy document for the IAM role:
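A sketch of such a policy document, assuming a stack containing a single EC2 instance; the actions include permission to delete the role and instance profile themselves, which are deleted last. The ARN and action list are placeholders you would tailor (and scope down) to your stack's actual resources:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["cloudformation:DeleteStack"],
      "Resource": "arn:aws:cloudformation:us-east-1:123456789012:stack/myTestStack/*"
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:TerminateInstances", "ec2:DescribeInstances"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:DeleteInstanceProfile"
      ],
      "Resource": "*"
    }
  ]
}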

When you provision an Amazon EC2 instance in an AWS CloudFormation stack, you might specify additional actions to configure the instance, such as installing software packages or bootstrapping applications. Normally, CloudFormation proceeds with stack creation after the instance has been successfully created. However, you can use a CreationPolicy so that CloudFormation proceeds with stack creation only after your configuration actions are done. That way you'll know your applications are ready to go after stack creation succeeds.

A CreationPolicy instructs CloudFormation to wait on an instance until CloudFormation receives the specified number of signals. This policy takes effect only when CloudFormation creates the instance. Here's what a creation policy looks like:
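Here is a minimal sketch; the group sizes are illustrative, and LaunchConfig is an assumed launch configuration defined elsewhere in the template:

"AutoScalingGroup" : {
  "Type" : "AWS::AutoScaling::AutoScalingGroup",
  "Properties" : {
    "AvailabilityZones" : { "Fn::GetAZs" : "" },
    "LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
    "DesiredCapacity" : "3",
    "MinSize" : "1",
    "MaxSize" : "4"
  },
  "CreationPolicy" : {
    "ResourceSignal" : {
      "Count" : "3",
      "Timeout" : "PT5M"
    }
  }
}

The Timeout value uses the ISO 8601 duration format, so PT5M means five minutes.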

A CreationPolicy must be associated with a resource, such as an EC2 instance or an Auto Scaling group. This association is how CloudFormation knows what resource to wait on. In the example policy, the CreationPolicy is associated with an Auto Scaling group. CloudFormation waits on the Auto Scaling group until CloudFormation receives three signals within five minutes. Because the Auto Scaling group's desired capacity is set to three, the signal count is set to three (one for each instance).

If three signals are not received within five minutes, CloudFormation immediately stops the stack creation and labels the Auto Scaling group as failed to create, so make sure you specify a timeout period that gives your instances and applications enough time to be deployed.

Signaling a Resource

You can easily send signals from the instances that you're provisioning. On those instances, you should be using the cfn-init helper script in the EC2 user data script to deploy applications. After the cfn-init script, just add a command to run the cfn-signal helper script, as in the following example:
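A sketch of the relevant portion of the user data, assuming the instances are launched through a LaunchConfig resource and belong to the AutoScalingGroup resource shown earlier:

"UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
  "#!/bin/bash -xe\n",
  "/opt/aws/bin/cfn-init -v --stack ", { "Ref" : "AWS::StackName" },
  " --resource LaunchConfig --region ", { "Ref" : "AWS::Region" }, "\n",
  "/opt/aws/bin/cfn-signal -e $? --stack ", { "Ref" : "AWS::StackName" },
  " --resource AutoScalingGroup --region ", { "Ref" : "AWS::Region" }, "\n"
] ] } }

The -e $? option passes the exit status of cfn-init to CloudFormation, so a failed configuration reports a failure signal.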

When you signal CloudFormation, you need to let it know what stack and what resource you're signaling. In the example, the cfn-signal command specifies the stack that is provisioning the instance, the logical ID of the resource (AutoScalingGroup), and the region in which the stack is being created.

With the CreationPolicy attribute and the cfn-signal helper script, you can ensure that your stacks are created successfully only when your applications are successfully deployed. For more information, you can view a complete sample template in the AWS CloudFormation User Guide.

With AWS CloudFormation, you can provision the full breadth of AWS resources including Amazon EC2 instances. You provision the EC2 instances to run applications that drive your business. Here are some best practices for deploying and updating those applications on EC2 instances provisioned inside CloudFormation stacks:

Best Practice 1: Use AWS::CloudFormation::Init

When you include an EC2 instance in a CloudFormation template, use the AWS::CloudFormation::Init section to specify what application packages you want downloaded on the instance, where to download them from, where to install them, what services to start, and what commands to run after the EC2 instance is up and running. You can do the same when you specify an Auto Scaling launch configuration. Here’s a fill-in-the-blanks example:
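A rough sketch; replace the angle-bracket placeholders with your own package names, URLs, commands, and instance properties:

"MyInstance" : {
  "Type" : "AWS::EC2::Instance",
  "Metadata" : {
    "AWS::CloudFormation::Init" : {
      "config" : {
        "packages" : {
          "yum" : { "<package name>" : [] }
        },
        "sources" : {
          "<target directory>" : "<URL of application tarball>"
        },
        "commands" : {
          "<command name>" : { "command" : "<command to run>" }
        },
        "services" : {
          "sysvinit" : {
            "<service name>" : { "enabled" : "true", "ensureRunning" : "true" }
          }
        }
      }
    }
  },
  "Properties" : { "<instance properties>" : "<...>" }
}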

So what do you gain by using AWS::CloudFormation::Init instead of a plain EC2 user data script? First of all, it is declarative. You just specify the desired configuration and let CloudFormation figure out the steps to get there. For example, in the "sources" section you just specify the remote location to download an application tarball from and a directory on the instance where you want to install the application source. CloudFormation takes care of the precise steps to download the tarball, retry on any errors, and extract the source files after the tarball is downloaded.

The same declarative specification is supported for packages or files to be downloaded, users or groups to be created, and commands or services to be executed. If you need to invoke a script, you can simply download that script by using the "files" section and execute the script using the "commands" section.

Configurations defined in AWS::CloudFormation::Init can be grouped into units of deployments, which can be reused, ordered, and executed across instance reboots. For details and examples, see Configsets.

Unlike the application specification coded in an EC2 user data script, the application configuration specified in AWS::CloudFormation::Init is updatable. This is handy, for example, when you want to install a new version of a package, without recreating a running instance. AWS::CloudFormation::Init supports securely downloading application packages and other data.

More on the benefits in a minute. First, let’s take a quick look at the sequence of how AWS::CloudFormation::Init works:

1. You specify application configuration using the AWS::CloudFormation::Init section for an EC2 instance in your CloudFormation template.

2. You kick off a CloudFormation stack creation using the template.

3. The AWS CloudFormation service starts creating a stack, including the EC2 instance.

4. After the EC2 instance is up and running, a CloudFormation helper script, cfn-init, is executed on the instance to configure the instance in accordance with your AWS::CloudFormation::Init template specification.*

5. Another CloudFormation helper script, cfn-signal, is executed on the instance to let the remote AWS CloudFormation service know the result (success/failure) of the configuration.* You can optionally have the CloudFormation service hold off on marking the EC2 instance state and the stack state “CREATE_COMPLETE” until the CloudFormation service hears a success signal for the instance. The holding-off period is specified in the template using a CreationPolicy.

*You can download the CloudFormation helper scripts for both Linux and Windows. These come preinstalled on the Linux and Windows AMIs provided by Amazon. You need to specify the commands to trigger cfn-init and cfn-signal in the EC2 user data script. Once an instance is up and running, the EC2 user data script is executed automatically for most Linux distributions and Windows.

Best Practice 2: Use AWS::CloudFormation::Authentication for secure downloads

You might want to store the application packages and data at secure locations and allow only authenticated downloads when you are configuring the EC2 instances to run the applications. Use the AWS::CloudFormation::Authentication section to specify credentials for downloading the application packages and data specified in the AWS::CloudFormation::Init section. Although AWS::CloudFormation::Authentication supports several types of authentication, we recommend using an IAM role. For an end-to-end example, refer to the earlier blog post “Authenticated File Downloads with CloudFormation.”

Best Practice 3: Use CloudWatch Logs for Debugging

When you are configuring an instance using AWS::CloudFormation::Init, configuration logs are stored on the instance in the cfn-init.log file and other cfn-*.log files. These logs are helpful for debugging configuration errors. In the past, you had to SSH or RDP into EC2 instances to retrieve these log files. However, with the advent of Amazon CloudWatch Logs, you no longer have to log on to the instances. You can simply stream those logs to CloudWatch and view them in the AWS Management Console. Refer to an earlier blog post “View CloudFormation Logs in the Console” to find out how.

Best Practice 4: Use cfn-hup for updates

Once your application stack is up and running, chances are that you will update the application, apply an OS patch, or perform some other configuration update in a stack’s lifecycle. You just update the AWS::CloudFormation::Init section in your template (for example, specify a newer version of an application package), and call UpdateStack. When you do, CloudFormation updates the instance metadata in accordance with the updated template. Then the cfn-hup daemon running on the instance detects the updated metadata and reruns cfn-init to update the instance in accordance with the updated configuration. cfn-hup is one of the CloudFormation helper scripts available on both Linux and Windows.

Look for cfn-hup in some of our sample templates to find out how to configure cfn-hup.
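cfn-hup reads its configuration from files on the instance, which you would typically create through the "files" section of AWS::CloudFormation::Init. A minimal sketch, assuming an instance with the logical ID MyInstance (the stack, region, and resource names are placeholders):

# /etc/cfn/cfn-hup.conf
[main]
stack=<stack name or ID>
region=<region>
interval=5

# /etc/cfn/hooks.d/cfn-auto-reloader.conf
[cfn-auto-reloader-hook]
triggers=post.update
path=Resources.MyInstance.Metadata.AWS::CloudFormation::Init
action=/opt/aws/bin/cfn-init -v --stack <stack name> --resource MyInstance --region <region>
runas=root

The hook tells cfn-hup to rerun cfn-init whenever the instance's AWS::CloudFormation::Init metadata changes after an UpdateStack call.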

Best Practice 5: Minimize application boot time with custom AMIs

When you are using CloudFormation, you can use AMIs, user data scripts, AWS::CloudFormation::Init, or third-party configuration tools to configure EC2 instances and bootstrap applications. On one hand, AWS::CloudFormation::Init and the other configuration tools provide a great deal of flexibility and control over instance configuration. On the other hand, AMIs offer the fastest application boot times, since your desired configuration and application can be preinstalled when you create a custom AMI.

Some CloudFormation customers optimize their usage by selecting an instance configuration and application bootstrapping method based on the environment. They employ AWS::CloudFormation::Init and other tools for flexibility and control in development and test environments. When they have a desired configuration developed and tested, they create a custom AMI and use that custom AMI in their CloudFormation stacks in production. The result is faster application boot times.

This optimization requires you to maintain two different configuration methods and requires you to keep track of which AMI corresponds to which version-controlled configuration for future reference and updates. As such, this might be worth looking into only if you have a business need to boot up several homogeneous instances, on-demand, in the absolute shortest time possible.

These best practices are based on the real-world experience of our customers. Reach out to us at @AWSCloudFormer to let us know your feedback on these best practices and additional best practices that you may want to share.

With cost allocation tagging and the AWS Cost Explorer, you can see the cost of operating each of your AWS CloudFormation stacks.

Here’s how it works. AWS CloudFormation automatically tags each stack resource. For example, if you have a stack that creates an Amazon EC2 instance, AWS CloudFormation automatically tags the instance with the following key-value pairs:

aws:cloudformation:stack-name

The name of the stack, such as myTestStack.

aws:cloudformation:stack-id

The full stack ID, such as arn:aws:cloudformation:us-east-1:123456789012:stack/myTestStack/2ac98f30-5bdd-11e4-949b-50fa5262a838.

aws:cloudformation:logical-id

The logical ID of a resource that is defined in the stack template, such as myInstance.

To obtain the costs by stack, all you do is set up a billing report to include the AWS CloudFormation tags. Then you can filter your report in the AWS Cost Explorer to see the costs of items tagged with a specific stack name, stack ID, or logical ID. With Cost Explorer, you can see the costs associated with one or more stacks or view how much of a stack's cost is from a particular service, such as Amazon EC2 or Amazon RDS.

Setting up a billing report

To turn on billing reports, go to the Preferences page of the Billing and Cost Management console, select the option to receive billing reports, and specify the Amazon S3 bucket where the reports should be delivered.

Click Verify to ensure that your bucket exists and has the required permissions. You can use the AWS sample bucket policy to set the appropriate permissions. Copy and paste the sample policy into your bucket's policy.

Select the Detailed billing report with resources and tags, and then click Save preferences.

With this report, you can view a detailed bill with the report tags that you have included. Later, you will add AWS CloudFormation tags so that you can view costs for each AWS CloudFormation stack.

Note: The current month's data will be available for viewing in about 24 hours.

Configuring cost allocation tags

Under the Report section, click Manage report tags.

Select the AWS CloudFormation tags and then click Save.

Your billing report will now include three additional columns for each AWS CloudFormation tag. For example, if you created a stack named myTestStack, all resources in that stack will have the value myTestStack for the aws:cloudformation:stack-name column.

Analyzing costs in Cost Explorer

From your billing dashboard, click Cost Explorer and then click Launch Cost Explorer.

Note: If you just enabled reporting, data will be available for viewing in about 24 hours.

Select the Tags filter to view billing information about a particular stack or resource.

Select an AWS CloudFormation tag key to refine the filter.

Select the aws:cloudformation:stack-id or stack-name tag to view information about a particular stack.

Select the aws:cloudformation:logical-id tag to view information about a specific resource.

Select one or more values for the tag key that you selected, and then click Apply.

Cost Explorer displays billing information for your selected stacks or resources. For example, the following graph shows Amazon EC2 and Amazon RDS costs for a particular stack.

With these few simple steps, you can start analyzing the costs of your stacks and resources. Also, to help you estimate costs before you create a stack, you can use the AWS Simple Monthly Calculator. When you use the AWS CloudFormation console to create a stack, the create stack wizard provides a link to the calculator.

When you create AWS CloudFormation templates, you might find that you're continually describing the same set of resources in different templates. However, instead of repeatedly adding them to each of your templates, consider using nested stacks.

What Are Nested Stacks?

With nested stacks, you can link to a template from within any other template. You simply create a separate template for the resources that you want to reuse and then save that template in an Amazon S3 bucket. Whenever you want to add those resources in another template, use the AWS::CloudFormation::Stack resource to specify the S3 URL of the nested template.
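For example, a parent template might embed a reusable template stored in S3 like this (the bucket and template names are placeholders):

"Frontend" : {
  "Type" : "AWS::CloudFormation::Stack",
  "Properties" : {
    "TemplateURL" : "https://s3.amazonaws.com/<my-template-bucket>/frontend.template",
    "TimeoutInMinutes" : "10"
  }
}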

When to Use Nested Stacks

An example will show when and how nested stacks are useful. Imagine that you have two websites. Their templates are identical except for their backend database, as shown in the following figure:

Instead of describing the load balancer and autoscaling group in both templates, you can create a separate template and link to it, as shown in the following figure:

When you use the Website1 template to create a stack, CloudFormation creates two stacks (the Website1 stack and a nested Frontend stack). CloudFormation treats the nested stack as a resource of the parent stack. For example, if you update or delete the Website1 stack, CloudFormation updates or deletes the nested Frontend stack.

If you use the Website2 template to create a stack, CloudFormation creates another two stacks (the Website2 stack and a nested Frontend stack). Although the Website1 and Website2 stacks link to the same template, CloudFormation creates a new nested stack for each website (the nested stacks aren't shared).

You can also customize each nested stack. For example, if the website stacks require different configurations for their load balancers and autoscaling groups, you can use input parameters to customize those resources.

When you're deciding whether to use nested stacks, consider how much customization you need to do. The more customization each resource requires, the less beneficial nested templates become. However, if you can easily reuse a template pattern without too much customization, a nested stack is a good fit.

Why Use Nested Stacks

Assume that you wanted to use the Frontend resources to create more websites. You can easily reuse the Frontend template by including it as a nested stack. You don't need to manually add front-end resources to every new website template that you create.

In addition to being more efficient, nested stacks make assigning ownership to stack resources easier. Because nested stacks are separate templates, you can have separate owners maintain each template. For example, the owners of Website1 and Website2 don't need to worry about maintaining the load balancer and autoscaling group. They just nest the Frontend template in their website templates.

Meanwhile, the owners of the front-end resources can make changes to their template, such as increasing the desired capacity of the autoscaling group, without interfering with or modifying anyone else's template. Any stacks that use the Frontend template will get the changes the next time those stacks are updated. In other words, you can take advantage of role specialization, letting the experts own and make changes to the resources that they understand.

Anytime you see a pattern in multiple templates, look to see if you can use nested stacks. Nesting makes your templates easier to reuse and assign ownership. Also, just like any other stack, you can send inputs to and get output from the nested stacks. For sample templates, see Stack Resource Snippets in the AWS CloudFormation User Guide.

When you delete a stack, by default AWS CloudFormation deletes all stack resources so that you aren't left with any strays. This also means any data that you have stored in your stack is deleted (unless you take manual snapshots). For example, data stored in Amazon EC2 volumes or Amazon RDS database instances is deleted.

But what if you want to retain your data when you or someone else deletes your stack? Maybe you want to migrate your data to another stack or maybe you want to prevent your data from being unintentionally deleted. If that's the case, you can have CloudFormation automatically retain resources or take snapshots of your database resources. Doing so preserves your data even if your stack is deleted.

To retain a resource or to create a snapshot when a stack is deleted, specify a DeletionPolicy for the corresponding resource in your CloudFormation template. Describe the resource like you normally would and just add the DeletionPolicy attribute, as shown in the following example:
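A minimal sketch, assuming an Amazon S3 bucket resource:

"myS3Bucket" : {
  "Type" : "AWS::S3::Bucket",
  "DeletionPolicy" : "Retain"
}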

You can specify Retain with any resource, but you can create snapshots only of resources that support snapshots, such as the AWS::EC2::Volume, AWS::RDS::DBInstance, and AWS::Redshift::Cluster resources.

If you launch a stack with this template snippet, CloudFormation creates an Amazon S3 bucket just like any other bucket. However, when the stack is deleted, CloudFormation deletes the stack and all stack resources except for the bucket. The bucket is still available, but you'll need to use the S3 service to work with the bucket, not CloudFormation. Note that you'll still be charged for any costs that are associated with the bucket.

For resources with a snapshot DeletionPolicy, the behavior is a little bit different because the resource is deleted. However, CloudFormation creates a snapshot of that resource before deleting it.

For instance, imagine that you launched a stack with the following template snippet, which has a snapshot DeletionPolicy associated with an RDS database:
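A sketch of such a snippet; the database properties are illustrative placeholders:

"myDB" : {
  "Type" : "AWS::RDS::DBInstance",
  "DeletionPolicy" : "Snapshot",
  "Properties" : {
    "AllocatedStorage" : "5",
    "DBInstanceClass" : "db.t2.micro",
    "Engine" : "MySQL",
    "MasterUsername" : "<user name>",
    "MasterUserPassword" : "<password>"
  }
}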

When you delete the stack, CloudFormation creates a snapshot of the database instance and then deletes the stack and all stack resources. The snapshot won't show up in CloudFormation; you'll need to use the RDS service to work with the snapshot. The name of the snapshot will include the stack name, the logical ID of the database instance, and other identifying information. You could also retain the database instance if you wanted to keep it up and running. But, depending on your goal, it might be more cost effective to create a snapshot. Note that you'll be charged for any costs that are associated with the snapshot.

A DeletionPolicy is a great way to preserve your data after a stack is deleted. For more information, see DeletionPolicy Attribute in the AWS CloudFormation User Guide.

Invalid input for parameter values is the number one reason for stack creation failures. To make it easier to enter the correct parameter values and to improve parameter validation, the AWS CloudFormation team recently added the ability to set additional data types for parameters.

Parameter types enable CloudFormation to validate inputs earlier in the stack creation process. For example, in the past, if you entered an invalid key pair, you had to wait until CloudFormation attempted to create the Amazon EC2 instance to see the problem. Now, CloudFormation validates the value much earlier in the stack creation process.

This benefit is highlighted with complex infrastructure, which takes longer to deploy. A good example would be a 2-tier application with a load-balanced web application backed by an RDS database.

Parameter types also make it possible to show more intuitive user interfaces, such as a dropdown of VPC IDs, to users who use the console to create stacks.

To set parameter types in your template, add a Type element to your parameter:

"Parameters" : {
"NameOfTheParameter" : {
"Type" : "<Type Name>"
}
}

CloudFormation currently supports the following parameter types:

String – A literal string

Number – An integer or float

List<Number> – An array of integers or floats

CommaDelimitedList – An array of literal strings that are separated by commas

AWS::EC2::KeyPair::KeyName – An Amazon EC2 key pair name

AWS::EC2::SecurityGroup::Id – A security group ID

AWS::EC2::Subnet::Id – A subnet ID

AWS::EC2::VPC::Id – A VPC ID

List<AWS::EC2::VPC::Id> – An array of VPC IDs

List<AWS::EC2::SecurityGroup::Id> – An array of security group IDs

List<AWS::EC2::Subnet::Id> – An array of subnet IDs

Let’s go through an example of how you can use an EC2 key pair and a list of EC2 security group IDs to deploy an EC2 instance.

EC2 Key Pair Parameter

Using an AWS-specific type, we add the EC2 key pair parameter. The type for an EC2 key pair is “AWS::EC2::KeyPair::KeyName”.

"Parameters" : {
"KeyName": {
"Description": "Name of an existing EC2 KeyPair to enable SSH access to the instance",
"Type": "AWS::EC2::KeyPair::KeyName",
"ConstraintDescription": "must be the name of an existing EC2 KeyPair."
}
}

Using the AWS custom type for the EC2 key pair validates any entered value against the existing EC2 key pairs in your account. The AWS console also displays a dropdown on the Specify Parameters form of the Create Stack wizard.

EC2 Security Group IDs

Using another AWS-specific type, let’s add a parameter that contains a list of EC2 security group IDs. The type for a list of EC2 security group IDs is “List<AWS::EC2::SecurityGroup::Id>”.

"Parameters" : {
"SecurityGroupIds": {
"Description": "Security groups that can be used to access the EC2 instances",
"Type": "List<AWS::EC2::SecurityGroup::Id>",
"ConstraintDescription": "must be list of EC2 security group ids"
}
}

Using the AWS custom type for the EC2 security group IDs validates any entered value against the existing EC2 security groups in your account. The AWS console also displays a multiselect box on the Specify Parameters form of the Create Stack wizard.

Using the Parameters

You can now use the parameters defined above in the template and access them with the built-in intrinsic functions such as “Ref”. For example, the snippet below uses the EC2 key pair and EC2 security group ID parameters set up above to instantiate an EC2 instance.
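A sketch of such a snippet; the AMI ID and instance type are placeholders:

"Ec2Instance" : {
  "Type" : "AWS::EC2::Instance",
  "Properties" : {
    "ImageId" : "<AMI ID>",
    "InstanceType" : "t2.micro",
    "KeyName" : { "Ref" : "KeyName" },
    "SecurityGroupIds" : { "Ref" : "SecurityGroupIds" }
  }
}

Because SecurityGroupIds is a list parameter, Ref resolves to the full list of selected security group IDs.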

(Updated on 1/23/2015 to include additional information on partner access security.)

AWS CodeDeploy is a new service that makes it easy to deploy application updates to Amazon EC2 instances. CodeDeploy is targeted at customers who manage their EC2 instances directly, rather than those who use application management services like AWS Elastic Beanstalk or AWS OpsWorks, which have their own built-in deployment features. CodeDeploy allows developers and administrators to centrally control and track their application deployments across their different development, testing, and production environments.

For a quick overview of what CodeDeploy can do, watch this introductory video. In this video, we demonstrate automatically triggering a deployment from a source code change in a GitHub repository. GitHub is a popular code management and developer collaboration tool. By connecting GitHub to CodeDeploy, you can set up an end-to-end pipeline to move your code changes from source control to your testing or production environments. The remainder of this post walks through the steps required to set up automatic deployments from GitHub to CodeDeploy.

Setting Up the Prerequisites

To start with, we’ll assume that you already have an application set up in CodeDeploy that’s successfully deploying to a set of EC2 instances. You can learn about the steps required to do this in our User Guide. To get started quickly, you can also create a sample deployment to a set of test instances through our Getting Started Wizard in the console. The steps below will use this getting started sample application, but you can also translate these actions to your own application.

Moving Your Application Into GitHub

If the application files that you want to deploy are not already in a GitHub repository, you'll need to set that up. Here’s how you can do it with the getting started sample application. First, download the application files. These examples use Linux / Unix commands.
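The commands might look like the following; the S3 path is the sample bundle location used in the Getting Started walkthrough, so adjust the region and path to match your setup:

mkdir SampleApp && cd SampleApp
aws s3 cp s3://aws-codedeploy-us-east-1/samples/latest/SampleApp_Linux.zip . --region us-east-1
unzip SampleApp_Linux.zip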

Next, you need to create a repository on GitHub to store these application files. If you need help, you can read the GitHub documentation. Your repository can be public or private. After the GitHub repository is created, you’ll push your local application files to it.
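The push might look like this, with your own GitHub user and repository names in place of the placeholders:

cd SampleApp
git init
git add .
git commit -m "Initial revision of the sample application"
git remote add origin https://github.com/<your-username>/<your-repository>.git
git push -u origin master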

Deploying Application Files from GitHub

Once your application files are in GitHub, you can configure CodeDeploy to pull the application bundle directly from the GitHub repository, rather than from Amazon S3. Let’s trigger a deployment from your GitHub repository using the AWS Management Console. From the Deployments page, click Create New Deployment. Select the name of your application, the target deployment group, and GitHub for the revision type. You should then see a Connect to GitHub section.

Click Connect With GitHub, and then step through the OAuth process. A few different things might happen next. First, if you are not logged into GitHub in your browser, you will be asked to log in. Next, if you haven’t already granted AWS CodeDeploy access to your GitHub repositories, you will be asked to authorize that now. Once this is done, you’ll return to the AWS Management Console and CodeDeploy will have the permissions required to access your repository. All that’s left is to fill in the Repository Name and Commit ID. The repository name will be in the format “GITHUB_USERNAME/REPOSITORY_NAME”. The commit ID will be the full SHA (a 40-character hexadecimal string), which you can copy through the GitHub UI. You can find this information on the commit history page of your repository.

Click Deploy Now, and then monitor the deployment in the console to ensure that it succeeds. After you’re sure this is set up correctly, you can proceed with configuring the automatic deployment from GitHub.

Calling AWS CodeDeploy from GitHub

There are two service hooks that you need to configure in GitHub to set up automatic deployments. The first is the AWS CodeDeploy service hook that enables GitHub to call the CodeDeploy API. When a third party requires access to your organization's AWS resources, the recommended best practice is to use an IAM role to delegate API access to them. By allowing a partner’s AWS account to assume a role in your account, you avoid sharing long-term AWS credentials with the partner. But if the partner you want to integrate with does not yet support roles, you should create an IAM user for your application with limited permissions. We will take that approach here and use the access keys for this user when making the AWS calls from GitHub. Go to the IAM Users page in the AWS Management Console. Click Create New Users. Enter “GitHub” for the user name in the first row.

Make sure that the option to generate an access key is checked, and click Create.

On the next page, click Show User Security Credentials to show the Access Key ID and Secret Access Key for the new user. Copy these down and store them in a safe and secure location, because this screen will be your last opportunity to download the secret key.

After you have the credentials, you can close out of the wizard. Next, you need to attach a policy to the new user to give them access permissions. Click the GitHub user in the IAM Users list. On the user page, scroll down to the Permissions section, and click Attach User Policy. Select the Custom Policy option and click Select. Enter a Policy Name like “CodeDeploy-Access”, and enter the following JSON into the Policy Document. You will need to replace “us-east-1” if you are using a different region, and replace “123ACCOUNTID” with your AWS account ID, which you can find on your Account Settings page. This policy is crafted to give the GitHub user only the minimum permission to call the CodeDeploy service APIs required for deployment.
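A sketch of such a policy; the application and deployment group names are placeholders, and the ARNs follow the CodeDeploy resource conventions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["codedeploy:GetDeploymentConfig"],
      "Resource": "arn:aws:codedeploy:us-east-1:123ACCOUNTID:deploymentconfig:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "codedeploy:GetApplicationRevision",
        "codedeploy:RegisterApplicationRevision"
      ],
      "Resource": "arn:aws:codedeploy:us-east-1:123ACCOUNTID:application:<application name>"
    },
    {
      "Effect": "Allow",
      "Action": ["codedeploy:CreateDeployment"],
      "Resource": "arn:aws:codedeploy:us-east-1:123ACCOUNTID:deploymentgroup:<application name>/<deployment group name>"
    }
  ]
}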

Click Apply Policy. Now you’re ready to configure the AWS CodeDeploy service hook on GitHub. From the home page for your GitHub repository, click on the Settings tab.

On the Settings page, click the Webhooks & Services tab. Then in the Services section, click the Add Service drop-down, and select AWS CodeDeploy. On the service hook page, enter the information needed to call CodeDeploy, including the target AWS region, application name, target deployment group, and the access key ID and secret access key from the IAM user created earlier.

After entering this information, click Add Service.

Automatically Starting Deployments from GitHub

Now, you’ll add the second GitHub service hook to enable automatic deployments. The GitHub Auto-Deployment service controls when deployments are initiated in response to repository events. Deployments can be triggered whenever the default branch is pushed to or, if you’re using a continuous integration service, only when test suites pass.

You first need to create a GitHub personal access token for the Auto-Deployment service to trigger a repository deployment. Go to the Applications tab on the Personal Settings page for your GitHub account. In the Personal Access Tokens section, click Generate New Token. Enter “AutoDeploy” for the Token Description, uncheck all of the scope boxes, and check only the repo_deployment scope.

Click Generate token. On the next page, copy the newly generated personal access token from the list, and store it in a safe place with the AWS access keys from before. You won’t be able to access this token again.

Now you need to configure the GitHub Auto-Deployment service hook on GitHub. From the home page for your GitHub repository, click on the Settings tab. On the Settings page, click the Webhooks & Services tab. Then in the Services section, click the Add Service drop-down, and select GitHub Auto-Deployment. On the service hook page, enter the information needed to call GitHub, including the personal access token and target deployment group for CodeDeploy.

After entering this information, click Add Service.

Now you’ll want to test that everything works together. From the home page of your GitHub repository, click index.html in the file list. On the file view page, click the pencil button on the toolbar above the file content to switch into edit mode.

You can change the web page content any way you like, such as by adding new text.

When you’re done, click Commit changes. If your prior configuration is set up correctly, a new deployment should be started immediately. Switch to the Deployments page in the AWS Management Console. You should see a new deployment at the top of the list that’s in progress.

You can browse to one of the instances in the deployment group to see when it receives the new web page. To get the public address of an instance, click the Deployment ID in the deployments list, and then click an Instance ID in the instances list to open the EC2 console. In the properties pane of the console, you can find the Public DNS for the instance. Copy and paste that value into a web browser address bar, and you can view the home page.

Going Further

If you’d like to connect your repository to a continuous integration service for unit testing, and only want to auto-deploy when those tests pass, you can read more about deploying on a commit status from this GitHub Auto-Deployment blog post.

While this setup works great if you’re deploying a static website or a dynamic language web application, you’ll need a build step in between GitHub and CodeDeploy if your application uses a compiled language. For this, you can choose from a wide selection of partners who have integrated their continuous integration services with CodeDeploy.

To dive deeper into the AWS CodeDeploy service, you can find links to documentation, tutorials, and samples on our Developer Resources page. We’d love to hear your feedback and suggestions for the service, so please reach out to the product team through our forum.

AWS CloudFormation simplifies provisioning on AWS. You can apply software engineering best practices such as version control, code reviews, unit tests, and continuous integration to the AWS CloudFormation templates, the same way you apply those best practices to your application code.

For example, with application code, you can add descriptive comments to help you document various portions of the code. Similarly, you can add descriptive comments to resources specified in the AWS CloudFormation templates.

One way to do that is to use the Metadata attribute. AWS CloudFormation supports this attribute on all types of resources. Inside the metadata, you can add any description relevant for your scenario. Here is an example template snippet:
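A sketch of what that could look like; the resource, comment text, and instance properties are illustrative:

"Resources" : {
  "WebServerHost" : {
    "Type" : "AWS::EC2::Instance",
    "Metadata" : {
      "Comment" : "This instance serves the public website. Contact the web team before modifying."
    },
    "Properties" : {
      "ImageId" : "<AMI ID>",
      "InstanceType" : "t2.micro"
    }
  }
}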

Today Jeff Barr blogged about a new feature that gives users the ability to deploy and operate applications on existing Amazon EC2 instances and on-premises servers with AWS OpsWorks. You may know OpsWorks as a service that lets users deploy and manage applications. However, OpsWorks can also perform operational tasks that simplify server management. This post includes three examples of how to use OpsWorks to manage instances. The examples create EC2 instances using OpsWorks, but you can also use the newly launched features to register on-premises servers or existing EC2 instances.

Example 1: Use OpsWorks to perform tasks on instances

Server administrators must often perform routine tasks on multiple instances, such as installing software updates. In the past, you might have logged in to each instance with SSH and run the commands manually. With OpsWorks you can now perform these tasks on every instance with a single command, as often as you like, by using predefined scripts and Chef recipes. You can even have OpsWorks run your recipes automatically at key points in the instance's life cycle, such as after the instance boots or when you deploy an app. This example shows how you can run a simple shell command and get the response back on the console.

Once the recipe run has completed, you can view the results by selecting the View link under Logs. About halfway down the log file you should see the output:

[2014-12-03T23:49:03+00:00] INFO: @@@
this is a test
@@@

Next steps

It’s usually a better practice to put each script you plan to run into a Chef recipe. Doing so improves consistency and avoids incorrect results. You can easily include Bash, Python, and Ruby scripts in a recipe. For example, the following recipe is basically a wrapper for a one-line Bash script:
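A minimal sketch using a stand-in command; the resource name and command are placeholders:

bash "run_my_script" do
  user "root"
  code <<-EOH
    echo "this is a test"
  EOH
end

Wrapping the command this way keeps it under version control with the rest of your cookbook and lets Chef report success or failure consistently.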

Example 2: Manage operating system users and ssh/sudo access

It is often useful to be able to grant multiple users SSH access to an EC2 instance. However, Amazon EC2 installs only one SSH key when it launches an instance. With OpsWorks, each user can have their own SSH key, and you can use OpsWorks to grant SSH and sudo permissions to selected users. OpsWorks then automatically adds the users' keys to the instance's authorized_keys file. If a user no longer needs SSH access, you remove those permissions and OpsWorks automatically removes the key.

Step 1: Import users into AWS OpsWorks

Sign in to AWS OpsWorks as an administrative user or as the account owner.

Click Users on the upper right to open the Users page.

Click Import IAM Users to display the users that have not yet been imported.

Select the users you want, then click Import to OpsWorks.

Step 2: Edit user settings

On the Users page, click edit in the user's Actions column.

Enter a public SSH key for the user and give the user the corresponding private key. The public key will appear on the user's My Settings page. For more information, see Setting an IAM User's Public SSH Key. If you enable self-management, the user can specify his or her own key.

Set the user's permissions levels for the stack you created in Example 1 to include "SSH" access. You can also set permissions separately by using each stack's Permissions page.

Step 3: SSH to the instance

Click Dashboard on the upper right to open the Dashboard page.

Select the stack you created in Example 1 and navigate to Instances.

Select the instance you created in Example 1.

In the Logs section you will see the execute_recipes command that added the user and the user's public key to the instance. When this command has completed, as indicated by the green check, select the SSH button at the top of the screen to launch an SSH client. You can then sign into the instance with your username and private key.

Example 3: Archive a file to Amazon S3

There are times when you may want to archive a file, for example, to investigate a problem later. The following example sends a file from an instance to S3.

Step 1: Create or select an existing S3 bucket

Open the S3 console and create a new bucket or select an existing bucket to use for this example.

Step 2: Execute the recipe

The sample::push-s3 recipe was included in the cookbook that you installed earlier. It gets the required information from the custom JSON and uses the AWS Ruby SDK to upload the file to S3.

Click Execute Recipes.
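The sample recipe itself isn't reproduced here, but a recipe along these lines could do the job, assuming custom JSON such as {"bucket":"<bucket name>","file":"<path to file>"} and the version 1 AWS SDK for Ruby available on the instance:

ruby_block "push-file-to-s3" do
  block do
    require 'aws-sdk'
    bucket = node[:bucket]
    path   = node[:file]
    # Credentials come from the instance's IAM role;
    # the object key is the file's base name
    AWS::S3.new.buckets[bucket].objects[File.basename(path)].write(:file => path)
  end
end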

Step 3: View the file in S3

The file you selected in step 2 should now be in your bucket.

These examples demonstrate three ways that OpsWorks can be used for more than software configuration. See the documentation for more information on how to manage on-premises and EC2 instances with OpsWorks.