Over time, we've optimized our own systems in order to make SNS and SQS available to even more customers. The goal has always been to charge less for processing the same volume of messages. We did this with the SQS Batch API in 2011 and more recently with long polling for SQS and 64KB payloads for SNS.

Today we are making SQS and SNS an even better value:

SQS API prices will decrease by 50%, to $0.50 per million API requests.

SNS API prices will decrease by 17%, to $0.50 per million API requests.

The SQS and SNS free tiers will each expand to 1 million free API requests per month, up 10x from 100K requests per month.
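The new rates are simple enough to sketch in a few lines. Here's an illustrative cost calculation under the new pricing, using hypothetical request volumes:

```python
# Illustrative cost calculation under the new pricing (hypothetical volumes).
# Assumes the rates above: $0.50 per million API requests after a
# 1 million request monthly free tier.

FREE_TIER = 1_000_000          # free API requests per month
PRICE_PER_MILLION = 0.50       # USD, new SQS/SNS API price

def monthly_cost(requests: int) -> float:
    """Estimate the monthly API charge for a given request volume."""
    billable = max(0, requests - FREE_TIER)
    return billable / 1_000_000 * PRICE_PER_MILLION

print(monthly_cost(800_000))     # within the free tier -> 0.0
print(monthly_cost(10_000_000))  # 9M billable requests -> 4.50
```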

The new prices take effect on March 1, 2013 and are applicable in all AWS Regions with the exception of the AWS GovCloud (US).

If you'd like to learn more about SQS and SNS, check out this video from AWS re:Invent:

-- Jeff (with lots of help from Jon Turow, Senior Product Manager for SQS and SNS)

AWS Diagnostics for Microsoft Windows Server addresses a common customer request: making the intersection between AWS and Windows Server easier to analyze and troubleshoot. For example, a customer's AWS security groups may allow access to certain Windows Server applications while the built-in Windows firewall inside the instance denies that access. Rather than leaving the customer to track down the cause of the issue, the diagnostics tool collects and analyzes the relevant information from Windows Server and AWS, and suggests troubleshooting steps and fixes.

The diagnostics tool works on running Windows Server instances. You can also attach your Windows Server EBS volumes to an existing instance, and the diagnostics tool will collect the relevant Windows Server logs from the EBS volume for troubleshooting. In the end, we want to help customers spend more time using, rather than troubleshooting, their deployments.

We appreciate all the product feedback through Twitter, blogs, forums and email. We wanted to share a few common questions that we’re hearing and explain a little more about our plans for the service. We also invite you to join our Introduction to AWS OpsWorks webinar on March 18, 2013 for a hands-on look at the service.

First, a little more about AWS OpsWorks. As Werner mentioned in his blog, AWS OpsWorks is built on technology originally developed by Peritor, the creators of Scalarium, which was acquired by AWS in 2012. We launched the service with initial support for DevOps application modeling, control, and automation use cases. We plan to rapidly broaden the service by adding more layer types, support for more AWS services, and new features that make it easier for you to control and automate your applications.

Q: Do you plan to support AWS services such as Amazon VPC, Amazon RDS, and Elastic Load Balancing?

Yes, in addition to already supporting Amazon EC2, Amazon CloudWatch, and AWS IAM, we plan on integrating other AWS services and allowing you to manage them directly from AWS OpsWorks. Your feedback on the AWS forums will help us prioritize which ones we add first. Today, though, you can use Chef recipes within AWS OpsWorks to integrate your Stack with any AWS service. You can see an example of integrating with Amazon S3 in the documentation walkthrough.

Q: What operating systems does AWS OpsWorks support?

AWS OpsWorks currently supports Amazon Linux and Ubuntu 12.04 LTS. Your feedback on the AWS forums will help us prioritize which additional operating systems to add.

Q: Does AWS OpsWorks support custom AMIs?

No, not at this time. However, AWS OpsWorks lets you customize the software on your instances by defining additional operating system packages and Chef recipes per Layer. Let us know if you need support for your use case in the AWS forums.

Q: Does AWS OpsWorks support other configuration management
solutions such as Puppet or CFEngine?

No, not at this time. Let us know if you need support for your use case in the AWS forums.

Q: Can AWS OpsWorks orchestrate changes using Chef
recipes after an instance has booted?

Yes. AWS OpsWorks sends lifecycle events to distribute information about the environment, including application deployments, new instance starts, and other information that may be important to maintain your application's configuration. For details please read the documentation on AWS OpsWorks Lifecycle Events.
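The event model boils down to mapping lifecycle events to the recipes you want to run. Here is a minimal sketch of that idea; the recipe names are hypothetical illustrations, not part of OpsWorks itself:

```python
# A sketch of the lifecycle event model: each event can trigger a list
# of Chef recipes on the instances in a Layer. The "myapp::..." recipe
# names below are made up for illustration.

LIFECYCLE_EVENTS = ("Setup", "Configure", "Deploy", "Undeploy", "Shutdown")

custom_recipes = {
    "Setup": ["myapp::dependencies"],
    "Configure": ["myapp::rewrite_config"],  # runs when topology changes
    "Deploy": ["myapp::migrate_db"],
}

def recipes_for(event: str) -> list:
    """Return the custom recipes to run for a given lifecycle event."""
    if event not in LIFECYCLE_EVENTS:
        raise ValueError(f"unknown lifecycle event: {event}")
    return custom_recipes.get(event, [])

print(recipes_for("Configure"))  # ['myapp::rewrite_config']
print(recipes_for("Shutdown"))   # no custom recipes -> []
```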

We have added three new features to the CloudWatch Monitoring Scripts for Linux. These scripts can be run in the background to periodically report system metrics to Amazon CloudWatch, where they will be stored for two weeks.

When you install the scripts you can choose to report any desired combination of the following metrics:

Memory Utilization - Memory allocated by applications and the operating system, exclusive of caches and buffers, as a percentage.

Memory Used - Memory allocated by applications and the operating system, exclusive of caches and buffers, in megabytes.

Memory Available - System memory available for applications and the operating system, in megabytes.

Disk Space Utilization - Disk space usage as a percentage.

Disk Space Used - Disk space usage in gigabytes.

Disk Space Available - Available disk space in gigabytes.

Swap Space Utilization - Swap space usage as a percentage.

Swap Space Used - Swap space usage in megabytes.

You can measure and report on disk space for one or more mount points or directories.
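The memory utilization metric above (exclusive of caches and buffers) can be sketched as a simple calculation over /proc/meminfo values. The formula and field names here are assumptions for illustration; the actual monitoring scripts may compute the figure differently:

```python
# A minimal sketch of the memory utilization calculation described above:
# memory used by applications and the OS, excluding caches and buffers.
# The formula and /proc/meminfo field names are illustrative assumptions.

def memory_utilization(meminfo: dict) -> float:
    """Return memory utilization as a percentage, excluding caches/buffers.

    `meminfo` maps /proc/meminfo field names to values in kilobytes.
    """
    total = meminfo["MemTotal"]
    used = total - meminfo["MemFree"] - meminfo["Buffers"] - meminfo["Cached"]
    return used / total * 100.0

sample = {"MemTotal": 1_000_000, "MemFree": 400_000,
          "Buffers": 50_000, "Cached": 150_000}
print(memory_utilization(sample))  # 400,000 used of 1,000,000 -> 40.0
```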

Here's what's new:

IAM Role Support - The CloudWatch monitoring scripts now use AWS Identity and Access Management (IAM) roles to submit memory and disk metrics to CloudWatch.

Auto Scaling - You can now use the metrics generated by the scripts to drive scaling decisions in conjunction with EC2's Auto Scaling feature. For example, you could choose to scale up when average memory utilization reaches a predetermined percentage.

Aggregate Metrics - The scripts can now report aggregate metrics. Metrics of this type allow you to monitor memory and disk usage for multiple EC2 instances. You could, for example, monitor total memory utilization across all of your instances in a single aggregate metric.

AWS CloudFormation gives you
an easy way to create and manage a collection of related AWS resources. You define a template (or use one of ours) and hand it over to CloudFormation. It will take care of creating all of the necessary AWS resources (a stack), in the proper order.

Today we are introducing three new features to add additional power and flexibility to CloudFormation: provisioning of EBS-Optimized Auto Scaling Groups, rolling deployments of Auto Scaling Groups, and cancellation of stack updates.

Rolling Deployments of Auto Scaling Groups
CloudFormation has the ability to make changes to an Auto Scaling Group (a variable sized collection of EC2 instances governed by scaling rules that allow the group to expand or contract in response to changing requirements).

Today's new feature allows you to perform a rolling deployment of an Auto Scaling Group within a CloudFormation stack. Instead of updating all of the instances in a group at the same time, you can now replace or modify the instances in a step-by-step fashion, with control over minimum group size during the update, the number of instances to update concurrently, and a pause time between batches of updates. The information is specified in an update policy. To learn more about update policies, check out the AWS CloudFormation User Guide.
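As a rough sketch, the update policy fragment controlling a rolling deployment might look like the following. The key names follow CloudFormation's AutoScalingRollingUpdate policy; the surrounding resource definition is elided, and the specific values are only examples:

```python
import json

# A sketch of the UpdatePolicy fragment for a rolling deployment:
# minimum group size during the update, concurrent batch size, and the
# pause between batches. Key names follow CloudFormation's
# AutoScalingRollingUpdate policy; values here are examples only.
update_policy = {
    "UpdatePolicy": {
        "AutoScalingRollingUpdate": {
            "MinInstancesInService": "2",  # keep at least 2 instances serving
            "MaxBatchSize": "1",           # update one instance at a time
            "PauseTime": "PT5M",           # wait 5 minutes between batches
        }
    }
}
print(json.dumps(update_policy, indent=2))
```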

This feature will increase availability of your application during an update.

Cancellation of Stack Updates
You now have the ability to cancel a stack update that's underway. This will interrupt the operation and trigger a rollback. The cancel operation can be used in concert with update policies to automate the cancellation and rollback of a deployment.

Here's one way that you can use cancellations and rolling deployments together:

Initiate a stack update using a template that includes a fairly generous pause time (perhaps as long as several minutes) between batches of concurrent upgrades.

Wait for the first round of updates to complete.

Validate that the new instance(s) perform as expected (this must occur within the pause time specified in the update policy).
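The validate-then-cancel step above can be automated. In this sketch, the `validate` and `cancel_update` callables are hypothetical stand-ins: in practice, validation might run health checks against the new instances, and cancellation would invoke CloudFormation's cancel operation, which triggers the rollback:

```python
# A sketch of automating the validate-then-cancel step above. Both
# callables are hypothetical stand-ins for real health checks and the
# CloudFormation cancel-update call.

def validate_or_cancel(validate, cancel_update, stack_name: str) -> bool:
    """Cancel the in-flight stack update if validation fails.

    Returns True if the update may proceed, False if it was cancelled.
    Must complete within the pause time declared in the update policy.
    """
    if validate():
        return True
    cancel_update(stack_name)
    return False

# Example with stubs: a failing health check cancels the update.
cancelled = []
ok = validate_or_cancel(lambda: False, cancelled.append, "my-stack")
print(ok, cancelled)  # False ['my-stack']
```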

The AWS Marketplace now supports software running on Red Hat Enterprise Linux, commonly known as RHEL. If you use RHEL on AWS, you can now find, buy, and then one-click deploy an ever-growing set of applications from top-tier software vendors.

The AWS Marketplace
As you may know, the AWS Marketplace makes it easy for you to get started with the software packages of your choice. You don't have to worry about hardware provisioning or software installation. You simply locate the desired package, choose the location (an AWS Region) where you'd like to run it, select an EC2 instance type, and click to launch:

You can easily upgrade to new versions of RHEL as they are released, and you can purchase and leverage AWS Premium Support, backed up by Red Hat's own support program.

If you are an ISV and you ship products for RHEL, you can now list those products in the AWS Marketplace and sell them to hundreds of thousands of active AWS customers all over the world. This can help shrink your sales cycle, and we'll take care of all of the billing and disbursements for you.

In the Marketplace
The following products run on RHEL and are now available in the AWS Marketplace:

If you are a developer or a technical leader and you want to learn more about the AWS Cloud and how to put it to use, these free one-day events are for you. After a morning keynote by a senior AWS leader, you can choose from a number of breakout sessions on topics that will include AWS service overviews, deep dives, use cases, best practices for advanced users, and architecture.

Some of the Summits will be preceded by a day of Technical Bootcamps. We're also debuting the AWS Cloud Kata (Japanese for "Form") events as part of the Summits: a full day of the latest cloud computing best practices, illustrated by live demos. The Katas are designed for start-up CTOs, developers, engineers, architects, system administrators, and data scientists.

It’s been over six years since we launched Amazon EC2. Since that launch, we’ve delivered several solutions that make it easier for you to deploy and manage applications.

Two years ago we launched AWS CloudFormation, which provides an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion, and AWS Elastic Beanstalk, which allows users to quickly deploy and manage their applications in the AWS cloud. As our customers run more applications on AWS, they are asking for more sophisticated tools to manage their AWS resources and automate how they deploy applications.

AWS OpsWorks features an integrated management experience for the entire application lifecycle including resource provisioning, configuration management, application deployment, monitoring, and access control. It will work with applications of any level of complexity and is independent of any particular architectural pattern.

AWS OpsWorks was designed to simplify the process of managing the application lifecycle without imposing arbitrary limits or forcing you to work within an overly constrained model. You have the freedom to design your application stack as you see fit.

You can use Chef Recipes to make system-level configuration changes and to install tools, utilities, libraries, and application code on the EC2 instance within your application. By making use of the AWS OpsWorks event model, you can activate the recipes of your choice at critical points in the lifecycle of your application. AWS OpsWorks has the power to install code from a wide variety of source code repositories.

There is no additional charge for AWS OpsWorks. You pay only for the AWS resources (EC2 instances, EBS volumes, and so forth) that your application uses.

AWS OpsWorks is available today and you can start using it now!

AWS OpsWorks Concepts
Let's start out by taking a look at the most important AWS OpsWorks concepts.

An AWS OpsWorks Stack contains a set of Amazon EC2 instances and instance blueprints (which OpsWorks calls Layers) that are used to launch and manage the instances. Each Stack hosts one or more Applications. Stacks also serve as a container for the other user permissions and resources associated with the Apps. A Stack can also contain references to any number of Chef Cookbooks.

Each Stack contains a number of Layers. Each Layer specifies the setup and configuration of a set of EC2 instances and related AWS resources such as EBS Volumes and Elastic IP Addresses. We've included Layers for a number of common technologies including Ruby, PHP, Node.js, HAProxy, Memcached, and MySQL. You can extend these Layers for your own use, or you can create custom Layers from scratch. You can also activate the Chef Recipes of your choice in response to specific events (Setup, Configure, Deploy, Undeploy, and Shutdown).

AWS OpsWorks installs Applications on the EC2 instances by pulling the code from one or more code repositories. You can indicate that the code is to be pulled from a Git or Subversion repository, fetched via an HTTP request, or downloaded from an Amazon S3 bucket.

After you have defined a Stack, its Layers, and its Applications, you can create EC2 instances and assign them to specific Layers. You can launch the instances manually, or you can define scaling based on load or by time. Either way, you have full control over the instance type, Availability Zone, Security Group(s), and operating system. As the instances launch, they will be configured to your specifications using the Recipes that you defined for the Layer that contains the instance.

AWS OpsWorks will monitor your instances and report metrics to Amazon CloudWatch (you can also use Ganglia if you'd like). It will automatically replace failed instances with fully configured fresh ones.

AWS OpsWorks in Action
Let's take a quick look at the AWS OpsWorks Console. The welcome page provides you with a handy overview of the steps you'll take to get started:

Start out by adding a Stack. You have full control of the Region and the default Availability Zone. You can specify a naming scheme for the EC2 instances in the Stack and you can even select a color to help you distinguish your Stacks:

AWS OpsWorks will assign a name to each EC2 instance that it launches. You can select one of the following themes for the names:

You can add references to one or more Chef Cookbooks:

Now you can add Layers to your Stack:

You can choose one of the predefined Layer types or you can create your own custom Layer using community cookbooks for software like PostgreSQL, Solr, and more:

When you add a Layer of a predefined type, you have the opportunity to customize the settings as appropriate. For example, here's what you can customize if you choose to use the Ruby on Rails Layer:

You can add custom Chef Recipes and any additional software packages to your Layer. You can also ask for EBS Volumes or Elastic IP Addresses and you can configure RAID mount points:

You can see the built-in Chef recipes that AWS OpsWorks includes. You can also add your own Chef Recipes for use at various points in the application lifecycle. Here are the built-in Recipes included with the PHP Layer:

Here's what my Stack looks like after adding four layers (yours will look different, depending on the number and type of Layers you choose to add):

Applications are your code. You can add one or more Applications to the Stack like this. Your options (and this screen) will vary based on the type of application that you add:

With the Stack, the Layers, and the Applications defined, it is time to add EC2 instances to each Layer. As I noted earlier, you can add a fixed number of instances to a Layer or you can use time or load-based scaling, as appropriate for your application:

You can define time-based scaling in a very flexible manner. You can use the same scaling pattern for each day of the week or you can define patterns for particular days, or you can mix and match:
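One way to picture the mix-and-match flexibility is as a default daily pattern plus per-day overrides. This data model is an illustration only, not OpsWorks' actual schedule format:

```python
# A sketch of a mix-and-match time-based scaling schedule: a default
# daily pattern plus weekday overrides. The data model is illustrative,
# not OpsWorks' actual schedule representation.

DEFAULT = {hour: 2 for hour in range(24)}              # 2 instances all day
WEEKDAY = {**DEFAULT, **{h: 6 for h in range(9, 18)}}  # scale up 9am-6pm

schedule = {day: WEEKDAY for day in ("Mon", "Tue", "Wed", "Thu", "Fri")}
schedule.update({day: DEFAULT for day in ("Sat", "Sun")})

def instances_at(day: str, hour: int) -> int:
    """Return the desired instance count for a given day and hour."""
    return schedule[day][hour]

print(instances_at("Wed", 10))  # weekday business hours -> 6
print(instances_at("Sun", 10))  # weekend -> 2
```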

With everything defined, you can start all of the instances with a single click (you can also control them individually if you'd like):

Your instances will be up and running before long:

Then you can deploy your Applications to the instances:

You have the ability to control exactly which instances receive each deployment. AWS OpsWorks pulls the code from your repository and runs the deploy recipes on each of the selected instances to configure all the layers of your app. Here’s how it works: when you deploy an app, you might have a recipe on your database Layer that performs a specific configuration task, such as creating a new table. The recipes on your Layers let you simplify the configuration steps across all the resources in your application with a single action.

Of course, there are times that you may need to get onto the instances. AWS OpsWorks helps here, too. You can configure SSH keys for each IAM user as well as configure which IAM users can use sudo.

At the top level of AWS OpsWorks, you can manage the entire Stack with a couple of clicks:

I hope that you have enjoyed this tour of AWS OpsWorks. I've shown you the highlights; there's even more functionality that you'll discover over time as you get to know the product.

AWS OpsWorks Demo
Watch this short (5 minute) video to see a demo of AWS OpsWorks:

Getting Started
As always, we have plenty of resources to get you going:

Looking Ahead
Today, OpsWorks provides full control of software on EC2 instances and lets you use Chef recipes to integrate with AWS resources such as S3 and Amazon RDS. Moving forward, we plan to introduce integrated support for additional AWS resource types to simplify the management of layers that don’t require deep control. As always, your feedback will help us to prioritize our development effort.

What Do You Think?
I invite you to give AWS OpsWorks a whirl and let me know what you think. Feel free to leave a comment!