We are excited to announce support for hosting static websites on Amazon S3 at the root domain. Visitors to your website can now easily and reliably access your site from their browser without specifying “www” in the web address (e.g. “example.com”). Many customers already host static websites on Amazon S3 that are accessible via a “www” subdomain (e.g. “www.example.com”). Previously, to support root domain access, you needed to run your own web server to proxy root domain requests from browsers to your website on S3. Running a web server to proxy requests introduces additional costs, operational burden, and another potential point of failure. Now, you can take advantage of S3’s high availability and durability for both “www” and root domain addresses.
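For illustration, here is a minimal Python sketch of how S3 static-website endpoints are formed. The key point for root-domain hosting is that the bucket must be named exactly after the domain it serves; the region label used below is an example value.

```python
# Sketch of S3 website endpoint naming (the region label "us-east-1"
# and the domain names are illustrative examples).

def website_endpoint(bucket: str, region: str) -> str:
    """Return the S3 static-website endpoint for a bucket.

    For root-domain hosting, the bucket must be named exactly after
    the domain it serves (e.g. a bucket called "example.com").
    """
    return f"{bucket}.s3-website-{region}.amazonaws.com"

# Root-domain bucket and its "www" counterpart:
root = website_endpoint("example.com", "us-east-1")
www = website_endpoint("www.example.com", "us-east-1")
print(root)  # example.com.s3-website-us-east-1.amazonaws.com
```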

For more information on hosting your static website on Amazon S3 and support for hosting websites at the root domain, review our walkthrough in the Amazon S3 Developer Guide.

We are excited to announce that Amazon RDS is now available in the AWS GovCloud (US) region!

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. It automates time-consuming database administration tasks such as backing up your database daily, upgrading your database software, and applying security patches. In addition, Amazon RDS provides point-in-time restore and lets you deploy highly available, synchronously replicated databases, freeing you to focus on your applications and business. Amazon RDS supports the MySQL, SQL Server, and Oracle database engines in AWS GovCloud (US). Amazon RDS customers include organizations such as NASA’s Jet Propulsion Laboratory.

AWS GovCloud (US) is an AWS Region designed to allow US government agencies and customers to move more sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. To learn more, please visit our AWS GovCloud (US) home page or contact us.

We are excited to announce the availability of High Storage Instances, a new Amazon EC2 instance type optimized for applications requiring fast access to large amounts of data. Customers whose applications require high sequential read and write performance over very large data sets can take advantage of the capabilities of this new Amazon EC2 instance type. High Storage instances are especially well suited for customers who use Hadoop, data warehouses, and parallel file systems to process and analyze large data sets in the AWS cloud. High Storage instances are currently available as a single instance type, Eight Extra Large (hs1.8xlarge), and provide customers with 35 EC2 Compute Units (ECUs) of compute capacity, 117 GiB of RAM, and 48 TB of storage across 24 hard disk drives. hs1.8xlarge instances are capable of delivering more than 2.4 gigabytes per second of sequential I/O performance.

High Storage Eight Extra Large instances can be purchased as On-Demand or Reserved Instances in the US East (N. Virginia) region. Support for additional regions will be added in the coming months. You can learn more about the specifications and capabilities of High Storage instances for Amazon EC2 by visiting the Amazon EC2 instance type page. Detailed pricing information is available on the EC2 pricing page.

We are excited to release a developer preview of the AWS Command Line Interface, a new unified tool to manage your AWS services. With just one tool to download and configure, you will be able to control multiple AWS services from the command line and automate them through scripts. This first release supports 12 services, including Amazon EC2, Auto Scaling, Elastic Load Balancing, Amazon SQS, and Amazon SNS, with upcoming support for others. Since this is a developer preview, we are also looking for feedback from the community to help shape its design.

We're excited to introduce the AWS CloudFormation editor to simplify the authoring of CloudFormation templates. AWS CloudFormation allows you to easily create and update a collection of AWS resources through templates. The CloudFormation editor understands the template format and provides intelligent assistance as you're editing. You can use code completion with inline descriptions to quickly find and insert definition blocks for different AWS resources. Then you simply configure your specific details into the placeholders. The editor will also validate your template as you make changes to identify any problems in the file. Once you define your templates, you can use them to create and update your stack of AWS resources right from within Visual Studio and Eclipse.
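A CloudFormation template is ultimately just a JSON document. As a sketch, the snippet below assembles a minimal one in Python; the logical name, bucket resource, and bucket name are illustrative.

```python
import json

# A minimal CloudFormation template assembled in code. The logical
# resource name "SiteBucket" and the bucket name are illustrative.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "A single S3 bucket",
    "Resources": {
        "SiteBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-bucket"},
        }
    },
}

body = json.dumps(template, indent=2)
print(body)
```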

We are pleased to announce support for running Amazon ElastiCache Clusters in Amazon Virtual Private Cloud (Amazon VPC). With Amazon VPC, you can define a virtual network topology and customize the network configuration to closely resemble a traditional network that you might operate in your own datacenter.

You can now take advantage of the manageability, availability and scalability benefits of Amazon ElastiCache Clusters in your own isolated network. The same functionality of Amazon ElastiCache including automatic failure detection, recovery, scaling, auto discovery, Amazon CloudWatch metrics, and software patching, are now available in Amazon VPC.

We are excited to announce the public beta of AWS Data Pipeline, a web service that helps you reliably process and move data between different AWS compute and storage services as well as on-premises data sources at specified intervals. With AWS Data Pipeline, you can regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), Amazon DynamoDB, and Amazon Elastic MapReduce. AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. You don’t have to worry about ensuring resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system.

We are excited to announce that AWS Elastic Beanstalk configuration files now support environment resources. This mechanism allows you to provision and configure additional AWS resources that your application needs. In tandem, we are announcing the availability of an updated PHP runtime for Elastic Beanstalk that supports all recent Elastic Beanstalk platform functionality, including configuration files as well as integration with Amazon RDS and Amazon VPC.

Environment Resources

Using configuration files, you can configure software on Amazon EC2 instances within your environment, without having to create a custom AMI. Starting today, you can use configuration files to provision and configure resources such as an Amazon DynamoDB table, an Amazon CloudWatch alarm, or an Amazon SQS queue. The resources will be provisioned inside your Elastic Beanstalk environment and will be available for use with your application. To learn more about environment resources, visit the AWS Elastic Beanstalk Developer Guide.
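As a sketch of the idea, the snippet below assembles the Resources section such a configuration file might contain. Configuration files themselves live in your application's .ebextensions directory; the logical name "WorkQueue" and the queue name are illustrative.

```python
import json

# Shape of the Resources section of an Elastic Beanstalk configuration
# file that provisions an SQS queue alongside the environment. The keys
# follow CloudFormation resource syntax; the names are illustrative.
config = {
    "Resources": {
        "WorkQueue": {
            "Type": "AWS::SQS::Queue",
            "Properties": {"QueueName": "worker-queue"},
        }
    }
}

print(json.dumps(config, indent=2))
```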

Amazon CloudSearch users can now take advantage of two new features to manage their search applications.

The Amazon CloudSearch rank comparison feature provides a new way to visually compare changes to rank expressions. Developers can quickly see how different rank expressions and field weights affect the sorting of search results in side-by-side windows, and can adjust field weights using slider bars. This enables fast A/B testing and rapid iteration of rank expressions without rebuilding the index. To learn more about comparing rank expressions, watch this video.

The Amazon CloudSearch analytics reports provide insight into search effectiveness and user behavior. Metrics data can be tracked for each search domain including:

Total Searches—the total number of searches

Searches with No Results—the number of searches for which no matching documents were found

Top Searches—the most frequent searches

Frequent Searches without Results—the most frequent searches for which no matching documents were found

Top Documents—the documents that were most frequently returned in search results

Analytics reports can be viewed through the Amazon CloudSearch console and downloaded in CSV format for specific date ranges.
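For illustration, here is how you might post-process a downloaded report in Python. The CSV column layout shown is a simplified assumption, not the report's exact schema, and the counts are made up.

```python
import csv
import io

# Hypothetical rows in the shape of a downloaded analytics report.
# The column names and values are assumptions for illustration.
report = """\
metric,value
Total Searches,1042
Searches with No Results,87
"""

rows = list(csv.DictReader(io.StringIO(report)))
metrics = {r["metric"]: int(r["value"]) for r in rows}

# Share of searches that matched no documents:
no_result_rate = metrics["Searches with No Results"] / metrics["Total Searches"]
print(f"{no_result_rate:.1%}")  # 8.3%
```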

We are delighted to announce the immediate availability of a new feature, Elastic Block Store (EBS) Snapshot Copy. EBS Snapshot Copy enables you to copy your EBS snapshots across AWS regions, making it easier for you to leverage multiple AWS regions and accelerate geographic expansion, data center migration, and disaster recovery.

EBS Snapshot Copy is simple to use. In the AWS Management Console, you can select the snapshot to be copied, set the destination region, and start the copy. This feature can also be accessed via the EC2 Command Line Interface or the EC2 API, as described on the EBS Snapshot Copy page. The copied snapshot behaves the same as other snapshots in the destination region: it can be used to create new EBS volumes, which can then be attached to an EC2 instance in the destination region.

You will be charged only for the data transferred to copy the snapshot and to store the copied snapshot at the destination region.
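As a rough sketch of this cost model, the Python below combines a one-time transfer charge with ongoing storage at the destination. Both per-GB rates are placeholders, not published prices; check the EC2 pricing page for actual figures.

```python
# Back-of-the-envelope cost model for a cross-region snapshot copy.
# Both rates below are assumed placeholders, not published prices.
TRANSFER_PER_GB = 0.02        # one-time, for data moved between regions
STORAGE_PER_GB_MONTH = 0.095  # ongoing, at the destination region

def copy_cost(snapshot_gb: float, months: float) -> float:
    """Transfer charge plus storage at the destination for `months`."""
    return snapshot_gb * (TRANSFER_PER_GB + STORAGE_PER_GB_MONTH * months)

# e.g. a 100 GB snapshot kept in the destination region for 3 months:
print(round(copy_cost(100, 3), 2))
```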

We also plan to launch Amazon Machine Image (AMI) Copy as a follow-up to this feature, which will enable you to copy both public and custom-created AMIs across regions.

We are excited to announce Detailed Billing Reports, a new hourly-granularity view of your AWS usage and charges. This detailed report helps you better understand your AWS bill by providing hourly usage and cost data by product and Availability Zone. In addition, Consolidated Billing customers can now view unblended rates and costs. This report is particularly useful for analyzing your usage of Amazon EC2 On-Demand and Reserved Instances.
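For illustration, an hourly report can be aggregated with a few lines of Python. The column names below are simplified stand-ins for the real report's fields, and the rows are made up.

```python
import csv
import io
from collections import defaultdict

# Toy rows shaped like an hourly detailed billing report. The column
# names are simplified stand-ins for the real report's fields.
lines = """\
product,usage_hour,cost
AmazonEC2,2012-12-01T00:00Z,0.12
AmazonEC2,2012-12-01T01:00Z,0.12
AmazonS3,2012-12-01T00:00Z,0.03
"""

# Roll hourly line items up into a per-product total:
cost_by_product = defaultdict(float)
for row in csv.DictReader(io.StringIO(lines)):
    cost_by_product[row["product"]] += float(row["cost"])

print(dict(cost_by_product))  # {'AmazonEC2': 0.24, 'AmazonS3': 0.03}
```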

We are pleased to announce that Auto Scaling now uses Amazon EC2 instance status check results to help your applications run more effectively. Starting today, when an instance in an Auto Scaling group becomes unreachable and fails a status check, it will be replaced automatically.

Auto Scaling Health Checks

Whether you are running a large scale website distributed across many instances or a business application that runs on a single instance, you can now use Auto Scaling to improve the availability of your applications.

You do not need to take any action to begin using EC2 status checks in your Auto Scaling groups. Auto Scaling already incorporates these checks as part of the periodic health checks it already performs. As always, if you use an Elastic Load Balancer together with your Auto Scaling group, you can also choose to include the results of ELB health checks to maintain the health of your application.

More about Amazon EC2 Status Checks

Amazon EC2 status checks help identify problems that may impair an instance’s ability to run your applications. Status checks are the results of automated tests performed by EC2 on every running instance that detect hardware and software issues. These include loss of power or network connectivity, as well as other problems that prevent your operating system from accepting network packets. For more information about status checks, visit Monitoring the Status of Your Instances in the Amazon EC2 User Guide.

Provisioned IOPS for Amazon EBS: Provisioned IOPS is an Elastic Block Store volume type designed to deliver predictable, high performance for I/O-intensive workloads, such as database applications, that rely on consistent and fast response times. You can provision a volume with up to 2,000 IOPS and up to 1 TB of storage, and attach it to your Amazon EC2 instance. To enable your Amazon EC2 instances to fully utilize the IOPS provisioned on an EBS volume, we recommend using EBS-Optimized instances. You can learn more about Provisioned IOPS by visiting the Amazon EBS detail page.

Provisioned IOPS for Amazon RDS: Amazon RDS Provisioned IOPS storage is optimized for I/O-intensive, transactional (OLTP) database workloads. Starting immediately, when you create new database instances using the AWS Management Console or the Amazon RDS APIs, you can provision from 1,000 IOPS to 10,000 IOPS with corresponding storage from 100 GB to 1 TB for MySQL and Oracle databases. If you are using SQL Server, the maximum IOPS you can provision is 7,000 IOPS. In the near future, we plan to provide you with an automated way to migrate existing database instances to Provisioned IOPS storage for the MySQL and Oracle database engines. To learn more and get started with Amazon RDS Provisioned IOPS, please see the Amazon RDS detail page, the Amazon RDS User Guide and the Technical FAQ.
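The quoted RDS ranges can be captured in a small validator. Note that the fixed 10-IOPS-per-GB pairing below is an assumption inferred from the stated endpoints (1,000 IOPS with 100 GB up to 10,000 IOPS with 1 TB), not a documented rule.

```python
# Validator for the provisioning ranges quoted above. The exact 10:1
# IOPS-to-GB pairing is an assumption drawn from the stated endpoints.
IOPS_CAPS = {"mysql": 10_000, "oracle": 10_000, "sqlserver": 7_000}

def valid_provisioned_iops(engine: str, iops: int, storage_gb: int) -> bool:
    if not 1_000 <= iops <= IOPS_CAPS[engine]:
        return False
    if not 100 <= storage_gb <= 1_000:
        return False
    return iops == storage_gb * 10  # assumed fixed correspondence

print(valid_provisioned_iops("mysql", 10_000, 1_000))   # True
print(valid_provisioned_iops("sqlserver", 8_000, 800))  # False: above the 7,000 cap
```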

We are pleased to announce the immediate availability of Amazon RDS Provisioned IOPS for new database instances in the Asia Pacific (Singapore) Region.

Amazon RDS Provisioned IOPS is optimized for I/O-intensive, transactional (OLTP) database workloads. We are delivering this functionality to you in two stages. Starting immediately, when you create new database instances using the AWS Management Console or the Amazon RDS APIs, you can provision from 1,000 IOPS to 10,000 IOPS with corresponding storage from 100GB to 1TB for MySQL and Oracle databases. If you are using SQL Server then the maximum IOPS you can provision is 7,000 IOPS.

In the near future, we plan to provide you with an automated way to migrate existing database instances to Provisioned IOPS storage for the MySQL and Oracle database engines. If you want to migrate an existing RDS database instance to Provisioned IOPS storage immediately, you can export the data from your existing database instance and import into a new database instance equipped with Provisioned IOPS storage.

Amazon RDS Provisioned IOPS can be used with all RDS features like Multi-AZ, Read Replicas, and Amazon Virtual Private Cloud (VPC), and with all RDS-supported database engines (MySQL, Oracle, and SQL Server). With this launch, Amazon RDS Provisioned IOPS storage is now available in all AWS Regions where Amazon RDS is available. To learn more and get started with Amazon RDS Provisioned IOPS, please see the Amazon RDS detail page, the Amazon RDS User Guide and the Technical FAQ.

As part of our continued investment to make AWS a natural fit for Windows customers, we are excited to introduce the AWS Tools for Windows PowerShell. For developers, administrators, and IT pros alike, PowerShell is becoming the tool of choice to manage Windows environments. Now you can use PowerShell to manage your AWS services too. The AWS Tools for Windows PowerShell provides over 550 AWS cmdlets that let you perform quick command line actions and craft rich automation scripts, all from within the PowerShell environment.

We are excited to announce the availability of Amazon RDS Micro DB Instances in Amazon Virtual Private Cloud (Amazon VPC). Amazon VPC lets you provision a private, isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define.

You can now choose to launch Micro DB Instances within your own VPC. Also starting today, the AWS Free Usage Tier extends to Amazon RDS Micro DB Instances running in Amazon VPC. For more information about the AWS Free Usage Tier, please visit the AWS Free Usage Tier page.

Micro instances provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available. They are well suited for development/test use cases or lower throughput applications.

We are excited to release the developer preview of the AWS SDK for Node.js. This SDK enables developers to tap into the cost-effective, scalable, and reliable AWS cloud from their Node.js applications. You can use the asynchronous, event-based JavaScript calling pattern to access data in Amazon DynamoDB and Amazon S3, to control Amazon EC2 instances, and to participate in Amazon SWF workflows. Since this is a developer preview release, we are also looking for feedback from the community to help shape the SDK design.

Amazon Web Services is pleased to announce that AWS Marketplace now supports software built on Microsoft Windows Server. Customers can quickly discover and deploy Windows software titles to the AWS Cloud, including well known business intelligence, database, and hosting solutions from software vendors like MicroStrategy, Quest Software, and Parallels. You can then 1-click deploy software on Amazon EC2 instances running Windows Server, including 2003 R2, 2008, 2008 R2, and 2012 editions of Windows Server, paying only for what you use and scaling your software up or down as needed.

If you are a Windows ISV or software reseller you can list your software in AWS Marketplace and make your solutions available to hundreds of thousands of active AWS customers around the world. AWS Marketplace handles all billing, collections, and disbursements, with software revenue deposited directly into the software vendor’s or reseller’s account.

We are excited to announce that Amazon DynamoDB is now available in the AWS GovCloud (US) region!

AWS GovCloud (US) is an AWS Region designed to allow US government agencies and customers to move more sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. To learn more, please visit our AWS GovCloud (US) home page.

To learn more about how public sector agencies are using AWS and the AWS GovCloud (US) Region, visit the Public Sector website.

We are excited to announce the availability of Auto Discovery, a new way to connect to your Amazon ElastiCache cluster. Auto Discovery enables automatic discovery of cache nodes by clients when the nodes are added to or removed from an ElastiCache cluster.

Developers today must keep track of the node endpoints of their ElastiCache clusters. Also, they must update the list of endpoints manually to handle cluster membership changes. Depending on how the client application is architected, this might require shutting down the application and restarting it, thereby resulting in downtime. With Auto Discovery, we are eliminating this complexity. You only need to know a single cluster configuration endpoint that is valid throughout the life of your ElastiCache cluster and never changes as you modify your cluster.

As before, Amazon ElastiCache remains protocol-compliant with Memcached, a widely adopted memory object caching system, so code, applications, and popular tools that you use today with existing Memcached environments will continue to work seamlessly with Auto Discovery.
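To sketch what a cluster-aware client does with the configuration endpoint, the snippet below parses a node-list payload in the shape described by the Auto Discovery protocol documentation (a version line followed by space-separated host|ip|port entries). The hostnames and addresses are made up.

```python
# Parsing the node list that the configuration endpoint hands back to
# a cluster-aware client. The payload shape (version line, then
# "host|ip|port" entries separated by spaces) follows the Auto
# Discovery protocol documentation; the hostnames are made up.
payload = (
    "12\n"
    "cache-a.example.com|10.0.0.1|11211 cache-b.example.com|10.0.0.2|11211"
)

version_line, node_line = payload.splitlines()
nodes = [tuple(entry.split("|")) for entry in node_line.split()]
endpoints = [(host, int(port)) for host, _ip, port in nodes]
print(endpoints)  # [('cache-a.example.com', 11211), ('cache-b.example.com', 11211)]
```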

To get started, you will need the Amazon ElastiCache Cluster Client, which serves as your Memcached-compatible client with Auto Discovery capability. At this time we support Java; PHP and other language support is coming in the near future. If you would like to enhance your existing Memcached client with Auto Discovery capability, please see the protocol documentation.

For more information about Auto Discovery, please see Jeff Barr’s blog post.

Amazon Web Services is pleased to announce that Citrix NetScaler and Citrix CloudBridge are both available today for immediate purchase via AWS Marketplace. NetScaler, the company’s advanced cloud networking platform, and CloudBridge, which allows enterprises to connect securely to AWS, can both be deployed directly on AWS. Together, these Citrix products enable enterprises to extend their networks to AWS, making it a natural extension of their IT infrastructure, and to optimize their cloud deployments.

Citrix NetScaler on AWS Marketplace lets customers deploy the same L4-7 services on AWS that they use on-premises, helping to ensure the availability, scalability, and security of large public and private clouds within Amazon Virtual Private Cloud (VPC) on Amazon EC2.

We are excited to announce the limited preview of Amazon Redshift, a fast and powerful, fully managed, petabyte-scale data warehouse service in the cloud. Amazon Redshift offers you fast query performance when analyzing virtually any size data set using the same SQL-based tools and business intelligence applications you use today. With a few clicks in the AWS Management Console, you can launch a Redshift cluster, starting with a few hundred gigabytes of data and scaling to a petabyte or more, for under $1,000 per terabyte per year.

Amazon Redshift is designed for developers or businesses that require the full features and capabilities of a relational data warehouse. It is certified by Jaspersoft and MicroStrategy, with additional business intelligence tools coming soon.

We are excited to announce that we have reduced Amazon S3 standard storage and Reduced Redundancy Storage (RRS) prices, our 24th AWS price reduction. We’re making this price reduction in all nine of our regions. With this price change, all Amazon S3 standard storage and RRS customers will see a reduction in their storage costs. We are reducing standard storage prices 24-27% in the US Standard region, with similar price reductions across all other regions. We are reducing our 0-1 TB storage tier in the US Standard region to $0.095/GB, a 24% price reduction, and making similar reductions across all of our storage tiers for storage up to 5,000 TB. The new, lower prices for all regions can be found on the Amazon S3 pricing page. New prices are effective December 1st and will be applied to your bill for all storage on or after this date. We are happy to pass along these savings to you as we continue to scale, innovate, and drive down our costs.
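Tiered storage pricing is easy to compute. The sketch below uses the new first-tier price quoted above ($0.095/GB for the first terabyte in US Standard); the higher-tier prices are illustrative placeholders, not the published rates, so consult the Amazon S3 pricing page for real figures.

```python
# Tiered monthly storage cost. Only the first-tier price comes from
# the announcement; the other rates are assumed placeholders.
TIERS = [                   # (tier size in GB, price per GB-month)
    (1_000, 0.095),         # first 1 TB (from the announcement)
    (49_000, 0.080),        # assumed next-tier price
    (float("inf"), 0.070),  # assumed price beyond 50 TB
]

def monthly_cost(gb: float) -> float:
    """Charge each slice of usage at its tier's rate."""
    total, remaining = 0.0, gb
    for size, price in TIERS:
        used = min(remaining, size)
        total += used * price
        remaining -= used
        if remaining <= 0:
            break
    return total

# 5 TB: 1 TB at $0.095/GB plus 4 TB at the assumed next-tier rate.
print(round(monthly_cost(5_000), 2))
```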

AWS is pleased to announce that SAP Business Suite is now certified to run on the AWS cloud platform for production deployments. Enterprises running SAP Business Suite can now leverage the on-demand, pay-as-you-go AWS platform to support thousands of concurrent users in production without making costly capital expenditures for the underlying infrastructure. SAP solutions now certified for production use on AWS include SAP Business Suite, SAP HANA One, SAP Business All-in-One solutions, SAP Rapid Deployment solutions, SAP Afaria, and SAP BusinessObjects business intelligence (BI) solutions. For more information on deploying SAP solutions on AWS, including the cost savings that can be achieved, technical guides, customer success stories, and AWS partner-built test drives, visit http://aws.amazon.com/sap.

We're pleased to announce that beginning today, you can subscribe SQS queues to SNS topics via the SQS console. Customers tell us that SNS enables powerful fanout scenarios, where identical messages are transmitted to multiple SQS queues. For example, if you process a piece of media with multiple passes (say, metadata, thumbnails and OCR), fanout lets you accomplish those steps in parallel. It means your media files are processed faster and with less risk of delay due to bottlenecks at any one stage.

SNS+SQS has been available for a while, and in fact SNS offers free notifications to SQS queues. But it's previously been a complicated process, requiring customers to set permissions manually. This new console functionality makes the process far quicker and easier. It is available today in all regions.

Getting started with Amazon SQS and Amazon SNS is easy with our free tiers of service. To learn more, visit the Amazon SQS page and the Amazon SNS page.

We are happy to announce that Amazon DynamoDB is now available in the South America (Sao Paulo) Region.

Amazon DynamoDB is a fully-managed NoSQL database service that provides extremely fast and predictable performance with seamless scalability. With a few clicks in the AWS Management Console, you can easily create a new DynamoDB database table, or scale your table’s request capacity to the level that you need without incurring any downtime.

We are excited to introduce cross-account API access using AWS Identity and Access Management (IAM) roles. This new feature gives you increased control and simplifies access management when managing services and resources across multiple AWS accounts. Cross-account API access allows you to delegate temporary API access to AWS services and resources within your AWS account without having to share long-term security credentials.

You can now create an IAM role under your account with a set of permissions and grant a different AWS account the ability to enable its users to assume the role. When delegated IAM users assume the role, they only have access to services and resources explicitly granted by the role’s permissions.
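A sketch of the trust policy such a role might carry so that another account can assume it; both account IDs below are placeholders.

```python
import json

# Trust policy for a role in account 111122223333 that lets account
# 444455556666 assume it. Both account IDs are made-up placeholders.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
        "Action": "sts:AssumeRole",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

The role's separate permissions policy then bounds exactly what the delegated users can do once they have assumed the role.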

We are excited to announce support for Windows Server 2012 on Amazon Elastic Compute Cloud (Amazon EC2) and AWS Elastic Beanstalk.

Starting today, you can launch Windows Server 2012 EC2 instances in all AWS Regions and across all EC2 instance types. Windows Server 2012 is available at the same price on Amazon EC2 as the earlier versions of Windows Server. Eligible customers can even launch Windows Server 2012 for free under the terms of the AWS Free Usage Tier. We have published Windows Server 2012 Amazon Machine Images (AMIs) with support for Microsoft SQL Server 2012 and 2008 R2 (Express, Web and Standard Editions). To get started, you can access the AMIs via the AWS Management Console and AWS AMI Catalog.

AWS Elastic Beanstalk allows you to focus on building your application, without having to worry about the provisioning, deployment, monitoring and scaling details of your applications. Elastic Beanstalk already supports Java, PHP, Python, Ruby and Windows Server 2008 R2 based .NET applications. Starting today, Elastic Beanstalk will also support Windows Server 2012 based .NET applications. You can conveniently deploy your applications from Visual Studio or the AWS Management Console.

We are pleased to announce the immediate availability of Amazon RDS Provisioned IOPS for new database instances in the Asia Pacific (Tokyo) Region.

Amazon RDS Provisioned IOPS is optimized for I/O-intensive, transactional (OLTP) database workloads. We are delivering this functionality to you in two stages. Starting immediately, when you create new database instances using the AWS Management Console or the Amazon RDS APIs, you can provision from 1,000 IOPS to 10,000 IOPS with corresponding storage from 100GB to 1TB for MySQL and Oracle databases. If you are using SQL Server then the maximum IOPS you can provision is 7,000 IOPS.

In the near future, we plan to provide you with an automated way to migrate existing database instances to Provisioned IOPS storage for the MySQL and Oracle database engines. If you want to migrate an existing RDS database instance to Provisioned IOPS storage immediately, you can export the data from your existing database instance and import into a new database instance equipped with Provisioned IOPS storage.

In addition to the Asia Pacific (Tokyo) Region, you can currently use Amazon RDS Provisioned IOPS in the US East (Northern Virginia), US West (Northern California), US West (Oregon) and EU (Ireland) Regions. We plan to add support for additional AWS Regions in the coming months.

Note that you can now use AWS CloudFormation to create Amazon RDS DB Instances with Provisioned IOPS storage. If your application’s I/O needs change, you can simply modify the CloudFormation template and update your Provisioned IOPS. To learn more about Provisioned IOPS and CloudFormation, visit the AWS CloudFormation User Guide.

We are pleased to announce that AWS CloudFormation now supports user-defined, custom resources. AWS CloudFormation makes it easy for you to provision and configure a set of related resources that includes both AWS resources and custom resources. Once you implement a custom resource type, your CloudFormation templates can fully describe the needs of your application including resources that are not supported natively or resources that exist behind a service provider API.

Many applications require interactions with external entities such as monitoring services or firewalled data stores. Custom resources allow you to provision and configure these dependent entities from within your version-controlled AWS CloudFormation template. Under the hood, CloudFormation calls these external services on your behalf and provides them with input parameters using a simple protocol. The services can then expose outputs that you can reference from within the template. For more details about how to leverage user-defined resources in your CloudFormation templates, visit the AWS CloudFormation User Guide.
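To make the protocol concrete, here is a sketch of the response document a custom resource provider sends back to CloudFormation's pre-signed ResponseURL. The field names follow the custom resource protocol; the identifiers and output values are placeholders.

```python
import json

# Response a custom resource provider sends back after handling a
# request. Field names follow the custom resource protocol; the
# identifiers and outputs below are placeholders.
def make_response(request: dict, outputs: dict) -> str:
    return json.dumps({
        "Status": "SUCCESS",
        "PhysicalResourceId": "my-external-resource-001",  # placeholder id
        "StackId": request["StackId"],
        "RequestId": request["RequestId"],
        "LogicalResourceId": request["LogicalResourceId"],
        "Data": outputs,  # values templates can read back as attributes
    })

request = {
    "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/demo/guid",
    "RequestId": "unique-request-id",
    "LogicalResourceId": "MyMonitor",
}
body = make_response(request, {"Endpoint": "https://monitor.example.com"})
print(json.loads(body)["Status"])  # SUCCESS
```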

Starting today, service providers and developers can implement a custom resource that can be used within a CloudFormation template. For details about how to create user-defined resources, visit the AWS CloudFormation User Guide.

We are pleased to introduce a new storage option for Amazon S3 that enables you to utilize Amazon Glacier’s extremely low-cost storage service for data archival. Amazon Glacier stores data for as little as $0.01 per gigabyte per month, and is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable. With the new Amazon Glacier storage option for Amazon S3, you can define rules to automatically archive sets of Amazon S3 objects to Amazon Glacier for even lower cost storage.
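Such an archival rule can be sketched as a small document: the rule below would move objects under a prefix to Glacier a fixed number of days after creation. The prefix and day count are example values.

```python
import json

# A lifecycle rule that archives objects under "logs/" to Glacier 30
# days after creation. The rule id, prefix, and day count are examples.
lifecycle = {
    "Rules": [{
        "ID": "archive-logs",
        "Prefix": "logs/",
        "Status": "Enabled",
        "Transition": {"Days": 30, "StorageClass": "GLACIER"},
    }]
}

print(json.dumps(lifecycle, indent=2))
```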

The Amazon Glacier storage option for Amazon S3 is currently available in the US-Standard, US-West (N. California), US-West (Oregon), EU-West (Ireland), and Asia Pacific (Japan) Regions. You can learn more by visiting the Amazon S3 Developer Guide or joining our Dec 12 webinar.

We are excited to announce availability of new cache node types for Amazon ElastiCache:

Micro Cache Node Type (cache.t1.micro) in all regions except Asia Pacific (Sydney). A Micro cache node type comes with 213 MB of memory and is designed for lower traffic web applications, test applications and small projects. Our customers had asked for a lower entry point to get started with caching, which we’re happy to address with this launch.

Medium Cache Node Type (cache.m1.medium) in all regions except Asia Pacific (Sydney). A Medium cache node type has 3.35 GB of memory, fits between the Small and Large cache node types, and is ideal for workloads that require more memory than a Small offers but for which a Large would be too much.

Enhanced Extra Large Cache Node Type (cache.m3.xlarge) in US East (Northern Virginia). An Enhanced Extra Large cache node type is built on the next generation AWS infrastructure and has 14.6 GB of memory.

Enhanced Double Extra Large Cache Node Type (cache.m3.2xlarge) in US East (Northern Virginia). Similar to cache.m3.xlarge, an Enhanced Double Extra Large cache node type is also built on the next generation AWS infrastructure and has 29.6 GB of memory.

Amazon ElastiCache improves the performance of your web applications by retrieving data from a fast, in-memory cache instead of relying entirely on disk-based storage. Unlike other caching mechanisms, Amazon ElastiCache is fully-managed so you don’t have to worry about maintaining your own caching infrastructure. In addition, it is Memcached-compatible, so if you have existing Memcached-enabled applications, they should work with ElastiCache without any code changes.

Customers can purchase these cache node types for On-Demand or Reserved Instance usage. For more information about the new cache node types and prices, please visit the Amazon ElastiCache page.

AWS is excited to announce its new Asia Pacific (Sydney) Region. Starting today, customers can run their applications and workloads in the new Asia Pacific (Sydney) Region to reduce latency to end-users based in Australia and New Zealand while avoiding the up-front expenses, long-term commitments, and scaling challenges associated with maintaining and operating their own infrastructure. Sydney joins Singapore and Tokyo as the third Region in Asia Pacific and as the ninth Region worldwide.

The new Sydney Region is currently available for multiple services, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), and Amazon Relational Database Service (Amazon RDS). For a complete list of AWS Regions and services, visit the Global Infrastructure page. The newly launched Sydney Region is now available for any business or software developer to sign up and get started today at http://aws.amazon.com.

We are excited to announce two new features of Amazon Simple Queue Service (SQS) that we expect will both lower cost and increase performance for our customers.

SQS now supports long polling, which reduces extraneous polling associated with empty receives. With long polling, receiving a message from an empty queue no longer returns immediately. Instead, SQS waits up to 20 seconds for a message to arrive in the queue and returns it as soon as it does. You can enable long polling for the entire queue, or alternatively for individual polling requests.

You can enable long polling for entire queues using any supported version of the SQS WSDL, and you can enable long polling for individual requests using the latest WSDL, version 2012-11-05. You can learn more about long polling via the SQS Developer Guide and SQS API Reference.
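The difference between an empty receive and a long poll can be illustrated with a small local simulation. This is only a sketch of the semantics, using Python's standard-library queue as a stand-in for an SQS queue, not the SQS API itself (with the real service you would set a wait time of up to 20 seconds on the receive call):

```python
import queue
import threading
import time

def receive(q, wait_seconds):
    """Long-poll-style receive: block up to wait_seconds for a message,
    returning None on an empty receive (mirrors SQS semantics)."""
    try:
        return q.get(timeout=wait_seconds)
    except queue.Empty:
        return None

q = queue.Queue()

# Short polling: an empty queue returns (None) almost immediately.
assert receive(q, 0.01) is None

# Long polling: a message that arrives during the wait is returned at once,
# well before the full timeout elapses.
threading.Timer(0.1, q.put, args=("hello",)).start()
start = time.monotonic()
msg = receive(q, 2.0)
elapsed = time.monotonic() - start
assert msg == "hello" and elapsed < 2.0
```

With repeated empty receives, the long-polling version makes far fewer calls per message retrieved, which is where the cost reduction comes from.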

We are also launching an enhanced client in the AWS Java SDK. This extension of the existing AmazonSQSAsyncClient interface provides automatic batching of outgoing messages and pre-fetching of incoming messages.

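The batching behavior can be sketched in a few lines. This is an illustration of the idea behind the enhanced client, not its actual implementation; the class and parameter names are hypothetical:

```python
class MessageBatcher:
    """Accumulate outgoing messages and flush them in groups of up to
    batch_size (SQS batch operations accept at most 10 entries)."""

    def __init__(self, send_batch, batch_size=10):
        self.send_batch = send_batch  # callable taking a list of messages
        self.batch_size = batch_size
        self.pending = []

    def send(self, message):
        self.pending.append(message)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.send_batch(self.pending)
            self.pending = []

# 25 messages become two full batches of 10 plus a final flush of 5,
# turning 25 network calls into 3.
sent = []
batcher = MessageBatcher(sent.append, batch_size=10)
for i in range(25):
    batcher.send(i)
batcher.flush()
assert [len(batch) for batch in sent] == [10, 10, 5]
```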

To take advantage of the latest improvements to SQS, make sure you have the latest SDK libraries, which you can download at http://aws.amazon.com/code.

Getting started with Amazon SQS is easy with our free tier of service. To learn more, visit the Amazon SQS page.

As part of our on-going commitment to drive down costs and pass on those savings to you, we are lowering prices for Amazon RDS for MySQL and Oracle engines by up to 14% for On-Demand usage. We are also lowering On-Demand usage prices for Amazon ElastiCache by up to 16%. These new prices are applicable to M1 DB Instance Classes and M1 Cache Node Types in the US East (N. Virginia) and US West (Oregon) Regions.

The price reductions are effective starting November 1.

Along with these price reductions, we are also excited to announce availability of two new Instance Classes for Amazon RDS:

Medium DB Instance Class (db.m1.medium) for the MySQL, SQL Server, and Oracle database engines, in all regions. The Medium DB Instance Class fits between the Small and Large DB Instance Classes, and is ideal for workloads requiring more memory and compute capacity than that of a Small, and where a Large would be considered too powerful.

Extra Large DB Instance Class (db.m1.xlarge) for the SQL Server and Oracle database engines, in all regions. This Instance Class provides 2X the memory and compute capacity of the Large Instance Class. The Extra Large Instance Class is also optimized for use with Amazon RDS Provisioned IOPS, making it ideal for I/O intensive, transactional (OLTP) database workloads.

Customers can purchase these two new DB Instances for On-Demand or Reserved Instance usage.

Amazon Web Services is pleased to announce the publication of the article “Deploy a Microsoft SharePoint 2010 Server Farm in the AWS Cloud in 6 Simple Steps.”

Based on the “Microsoft SharePoint Server on AWS Reference Architecture” white paper published in April, the article provides a step-by-step approach to the setup and deployment of the public website scenario described in the white paper. The article also includes all of the necessary resources you will need to deploy a SharePoint Server farm repeatedly, such as the easy-to-launch AWS CloudFormation templates, and a link to the “Advanced Implementation Guide,” which provides detailed instructions on how to customize and launch a fully functional, enterprise-class SharePoint Server farm to suit your requirements. All templates are available to download and customize.

Amazon CloudSearch users can now use field weighting and query time rank expressions to fine-tune their search results. Field weighting prioritizes matches in certain fields to improve document ranking. For example, if you are selling books on your site, you could have matches in the title field score higher than matches in other fields. That way, searches for “Harry Potter” will rank the Harry Potter books higher than a book that happens to have a reference to “Harry Potter” in the description. Query time rank expressions enable you to personalize search results by customizing ranking for each search request. For example, you could rank camping books higher for a customer who you know has an interest in camping. You can also use query time rank expressions to rapidly iterate and test changes in how you rank search results for all users without having to rebuild your index.

For a limited time, AWS is offering a free trial program for Amazon CloudSearch that enables new users to set up fully-functional search domains that they can use for up to 30 days at no charge. The Amazon CloudSearch free trial program can be used to develop and test new search applications, migrate existing applications, or simply gain hands-on experience with Amazon CloudSearch. The free trial is available in the US East (N. Virginia) Region to users worldwide. Participation in the trial program requires an AWS account with a valid credit card.

Along with the introduction of the free trial program, we are announcing a reduction in CloudSearch search instance pricing in the US East (N. Virginia) Region. The new pricing reflects a savings of 17-19%, with small search instances available for $0.10/hour. The new pricing is effective November 1.

Our BatchGetItem API will now support strongly consistent reads. This allows customers to take advantage of the performance benefits of our BatchGetItem API while still ensuring that they are retrieving the most up-to-date information from their DynamoDB table.

Customers can now update the provisioned capacity of up to 10 tables simultaneously.

We have removed the limit on the minimum percentage change when updating your DynamoDB table's provisioned capacity.
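For illustration, the request shape for a strongly consistent batch read looks like the following. The table and key names are hypothetical; the dictionary matches the RequestItems parameter of the BatchGetItem API, where ConsistentRead is set per table:

```python
def batch_get_request(table, keys, consistent=True):
    """Build the RequestItems payload for a BatchGetItem call;
    ConsistentRead is specified per table."""
    return {
        table: {
            "Keys": keys,
            "ConsistentRead": consistent,
        }
    }

request_items = batch_get_request(
    "Products",  # hypothetical table name
    [{"Id": {"N": "1"}}, {"Id": {"N": "2"}}],
)
assert request_items["Products"]["ConsistentRead"] is True
```

You would pass this dictionary as the RequestItems argument of the SDK’s BatchGetItem operation.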

To take advantage of the latest improvements to DynamoDB, make sure you have the latest SDK libraries, which you can download at http://aws.amazon.com/code.

Getting started with Amazon DynamoDB is easy with our free tier of service. To learn more, visit the Amazon DynamoDB Page.

The AWS SDK for PHP 2 has been completely re-written to embrace modern PHP coding patterns and better integrate with popular PHP community frameworks. The new version builds on the Guzzle HTTP framework, which provides persistent connection management and increased networking performance. It also integrates the Symfony2 EventDispatcher to give developers event-driven customization hooks. The AWS SDK for PHP 2 enables you to write faster AWS applications with less code.

We are excited to announce support for up to 2,000 IOPS per Amazon EBS Provisioned IOPS volume, doubling the previously available performance delivered from a single volume. Provisioned IOPS volumes are designed to provide predictable, high performance for I/O intensive workloads, such as database applications, that rely on consistent and fast response times. With Provisioned IOPS, you can flexibly specify both volume size and volume performance, and Amazon EBS will consistently deliver the desired performance over the lifetime of the volume.

You can attach multiple Amazon EBS volumes to an Amazon EC2 instance and stripe across them to deliver thousands of IOPS to your application. To enable your Amazon EC2 instance to fully utilize the IOPS provisioned on an EBS volume, you can attach your volumes to instances that are EBS-Optimized. EBS-Optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Mbps and 1000 Mbps depending on the instance type used.
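The aggregate effect of striping is simple arithmetic, assuming I/O spreads evenly across the volumes and the instance’s EBS throughput is not the bottleneck:

```python
def striped_iops(volumes, iops_per_volume=2000):
    """Aggregate IOPS across a stripe set, assuming I/O distributes
    evenly over all volumes (and EBS-Optimized throughput suffices)."""
    return volumes * iops_per_volume

# Four volumes provisioned at 2,000 IOPS each can deliver 8,000 IOPS
# in aggregate to the application.
assert striped_iops(4) == 8000
```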

We are excited to announce that AWS Elastic Beanstalk now supports Ruby applications and Amazon Virtual Private Cloud (Amazon VPC). Elastic Beanstalk allows you to focus on building your application, without having to worry about the provisioning, deployment, monitoring, and scaling details of your Java, PHP, Python, .NET, and now Ruby applications. With Amazon VPC, Elastic Beanstalk lets you provision a private, isolated section of your application in a virtual network that you define.

Ruby Support

Elastic Beanstalk supports Ruby applications and frameworks that run on the Passenger application server. This allows your local development settings to match the Elastic Beanstalk environment so you can deploy with confidence and with minimal code changes. To get started running your Ruby applications on Elastic Beanstalk, visit the Elastic Beanstalk Developer Guide. The Developer Guide also includes walkthroughs for Ruby on Rails and Sinatra applications.

VPC Integration

Starting today, you can also run your Elastic Beanstalk environments inside existing VPCs. Using Amazon VPC, you can now easily deploy a new class of web applications on Elastic Beanstalk, including internal web applications (such as your recruiting application), web applications that connect to an on-premise database (using a VPN connection), as well as private web service backends. To learn more about deploying your Elastic Beanstalk application in a VPC, visit the Elastic Beanstalk Developer Guide.

We are excited to announce a new generation of the original Amazon EC2 instance family: second generation Standard instances (M3 instances). These new instances provide customers with the same balanced set of CPU and memory resources as first generation Standard instances (M1 instances) while providing 50% more computational capability per core. M3 instances come in two instance types, extra-large (m3.xlarge) and double extra-large (m3.2xlarge), and are currently available in the US East (N. Virginia) region, starting at a Linux On-Demand price of $0.58/hr for extra-large instances. Customers can also purchase M3 Standard instances as Reserved Instances or as Spot instances.

To learn more about Amazon EC2 instance types and to find out which instance type might be useful for you, please visit the Amazon EC2 Instance type page.

Along with the introduction of the M3 Standard instance family, we are announcing a reduction in Linux On-Demand pricing for M1 Standard instances in the US East (N. Virginia) and US West (Oregon) Regions by almost 19%, with the M1 small (m1.small) instance now available for $0.065/hr. The new pricing is effective November 1.

You can see the new, lower, pricing for all M1 instances and complete pricing for M3 instances on the Amazon EC2 pricing page.

Early in 2012, we launched the AWS Storage Gateway, which allowed customers to download a virtual storage gateway from AWS, associate local data with it, and asynchronously back up that local data to AWS. Many customers are using this service. Other customers have asked us how they can use the AWS Storage Gateway to save their primary data to Amazon S3 (to enjoy more cost savings) yet retain some portion of it locally in a cache for frequently accessed data. For this use case, we’re excited to announce the immediate availability of Gateway-Cached Volumes. Gateway-Cached volumes enable you to store primary data in Amazon S3 via the standard Internet Small Computer System Interface (iSCSI). By storing your data on Amazon S3, you minimize the need to scale your local storage infrastructure.

As part of this announcement, we’ve also removed the beta tag from the AWS Storage Gateway, making the service Generally Available. You can learn more by visiting the AWS Storage Gateway User Guide or joining our Dec 5 Webinar.

We are pleased to announce that billing alerts now support the individual accounts that are linked to a consolidated bill in your organization. These individual accounts can now use billing alerts to monitor their allocated charges and set up automated email alerts to be notified when charges reach a specified threshold. You may want to use billing alerts for linked accounts if developers or projects in your organization are responsible for managing their own budgets or reducing costs. You can set up your first billing alert from a linked account in minutes. To get started, visit the AWS billing console to enable monitoring of your charges, then set your first billing alert on your account’s total AWS charges by specifying a bill threshold and an email address to notify. Once you do that, you will receive a subscription confirmation email from AWS sent to each address that you provided. Click the confirmation link to complete setup. Your alert will then become active, and you will receive a notification when charges exceed the threshold you chose. Within a few minutes, you will also be able to set additional billing alerts for the specific AWS services that you use.

If your organization’s paying account has already enabled monitoring, all linked accounts will be enabled automatically. Each linked account will be able to access only its own allocated charges, and paying accounts will continue to have access to all individual linked account charges in addition to the consolidated total. Each alert uses one Amazon CloudWatch alarm to monitor charges and one Amazon SNS topic to send the alert email, charged at standard rates. You can use up to 10 alarms and 1,000 email notifications free each month as part of the AWS Free Tier, and most customers will be able to use billing alerts at no additional charge. To learn more about billing alerts, visit the billing alerts page or view Monitor Your Estimated Charges in the Amazon CloudWatch Developer Guide. To learn more about Consolidated Billing or to start using it, read the Consolidated Billing page.

Sincerely,
The Amazon Web Services team

We are excited to announce the availability of Micro (t1.micro) instances in Amazon Virtual Private Cloud (Amazon VPC). Amazon VPC lets you provision a private, isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define.

Starting today, you can now choose to launch Micro instances within your own VPC. Also available today, the AWS Free Usage Tier extends to Micro instances running in Amazon VPC. For more information about the AWS Free Usage Tier, please visit the AWS Free Usage Tier page.

Micro instances provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available. They are well suited for lower throughput applications and web sites that require additional compute cycles periodically. You can learn more about how you can use Micro instances and appropriate applications in the Amazon EC2 documentation.

Amazon Web Services is pleased to announce that SAP HANA, SAP’s in-memory database and platform, is certified to run on AWS and is available today for immediate purchase via AWS Marketplace. HANA One, the new on-demand deployment option of HANA, is available with pay-as-you-go pricing, for only $0.99 an hour. You can deploy SAP HANA One on the AWS Cloud in minutes, with charges appearing on your AWS bill.

We are excited to announce that AWS CloudFormation now supports Amazon Relational Database Service (Amazon RDS) DB parameter groups and provisioned IOPS for Amazon Elastic Block Store (Amazon EBS) volumes and Amazon RDS DB Instances. AWS CloudFormation provides an easy mechanism to provision and configure a set of AWS resources. Using CloudFormation text-based templates, you can automate the creation and management of resources such as Amazon RDS DB Instances and Amazon EC2 instances.

Amazon RDS DB Parameter Groups

Amazon RDS makes it easy to set up, operate, and scale MySQL, SQL Server, and Oracle relational databases in the cloud. Amazon RDS parameter groups allow you to provide custom configuration values for Amazon RDS DB Instances. AWS CloudFormation now allows you to create Amazon RDS DB parameter groups and to associate them with RDS DB Instances. Using the text-based templates, you can easily list all of the parameters and their associated values and you can share them when you collaborate with other team members. To learn more about using RDS DB parameter groups with CloudFormation, visit the user guide.
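A template fragment along these lines declares a parameter group and associates it with a DB Instance. The resource names and parameter values are illustrative, not part of the announcement:

```json
{
  "Resources": {
    "MyDBParameterGroup": {
      "Type": "AWS::RDS::DBParameterGroup",
      "Properties": {
        "Family": "MySQL5.5",
        "Description": "Custom settings for the application database",
        "Parameters": { "max_connections": "500" }
      }
    },
    "MyDBInstance": {
      "Type": "AWS::RDS::DBInstance",
      "Properties": {
        "Engine": "MySQL",
        "DBInstanceClass": "db.m1.small",
        "AllocatedStorage": "100",
        "MasterUsername": "admin",
        "MasterUserPassword": "change-me",
        "DBParameterGroupName": { "Ref": "MyDBParameterGroup" }
      }
    }
  }
}
```

Because the parameters live in the template alongside the instance definition, they are versioned and shared the same way as the rest of your stack.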

Provisioned IOPS for Amazon EBS and Amazon RDS

Amazon EBS Provisioned IOPS volumes deliver predictable, high performance for I/O intensive workloads such as database applications. Using CloudFormation, you can now create an EBS Provisioned IOPS volume with a specific volume size and volume performance, and attach the volume to Amazon EC2 instances. You can also use CloudFormation to create Amazon RDS DB Instances with specific Amazon RDS Provisioned IOPS for your I/O intensive, transactional database workloads. If your application’s I/O needs change, you can simply modify the CloudFormation template and update your IOPS for both Amazon EBS and Amazon RDS. To learn more about Provisioned IOPS and CloudFormation, visit the user guide.
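For example, a Provisioned IOPS volume might be declared in a template like this (the Availability Zone and sizing values are illustrative):

```json
{
  "Resources": {
    "DataVolume": {
      "Type": "AWS::EC2::Volume",
      "Properties": {
        "AvailabilityZone": "us-east-1a",
        "Size": "500",
        "VolumeType": "io1",
        "Iops": "2000"
      }
    }
  }
}
```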

We are excited to announce a new feature that makes it easier to determine the state of your Amazon EC2 Spot Bids in the instance provisioning lifecycle. You can now see detailed information on why your Spot bid states have (or have not) changed, why your Spot instances were terminated or interrupted, and how to optimize your Spot bids to get them fulfilled. To get started, open the Amazon EC2 Management Console and follow these steps:

Click on “Spot Requests” in the Navigation pane

Refer to the new “Status” column in the “My Spot Instance Requests” pane for a short description of your bid statuses

Click on the bid you’re interested in to see detailed “Status” information in the information pane at the bottom

We are pleased to announce that Amazon RDS for MySQL now supports “Promote Read Replica” functionality. You can now convert a MySQL Read Replica into a “standalone” DB Instance using the “Promote Read Replica” option. This option stops replication and converts the Read Replica in its existing state into a “standalone” DB Instance.

You can use this option for a number of use cases including:

Perform DDL operations: DDL operations such as creating or rebuilding indexes can take a long time and impose a significant performance penalty on your DB Instance. You can perform these operations on a Read Replica and, once the operations are complete and the Read Replica has caught up with the Source DB Instance, promote the Read Replica and point your applications to it.

Sharding: Sharding embodies the “share-nothing” architecture and essentially involves breaking a larger database up into smaller databases. Common ways to split a database are splitting tables that are not joined in the same query onto different hosts, or duplicating a table across multiple hosts and using a hashing algorithm to decide which host a given row belongs to. You can create Read Replicas corresponding to each of your shards and promote them when you decide to convert them into standalone shards. You can then delete the rows or tables that belong to the other shards.

Recovery against failures: Amazon RDS provides multiple options for data recovery during failures, including Multi-AZ deployments and Point in Time Recovery. With the ability to promote, a Read Replica can be considered an additional recovery alternative against failures. Note, however, that with asynchronous replication, database writes occur on a Read Replica after they have already occurred on the Source DB Instance, and this replication “lag” can vary significantly depending on the workload. If your use case requires synchronous replication, automatic failure detection, and failover, we recommend you run your DB Instance as a Multi-AZ deployment.
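The hashing step in the sharding use case above can be sketched in a few lines. This is a minimal illustration; a production shard map would also need to handle resharding when the number of hosts changes:

```python
import hashlib

def shard_for(key, num_shards):
    """Map a row key to one of num_shards hosts via a stable hash.

    md5 is used here only for its uniform, deterministic digest,
    not for security.
    """
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % num_shards

# The same key always routes to the same shard, and every result
# falls within the shard range.
assert shard_for("customer-42", 4) == shard_for("customer-42", 4)
assert 0 <= shard_for("customer-42", 4) < 4
```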

Please refer to the Read Replicas section of the User Guide to learn more.

We are excited to announce SSL encryption support for Amazon RDS for SQL Server. You can now use SSL to encrypt SQL Server connections between your applications and your RDS SQL Server instances. SSL support is available in all AWS regions for all SQL Server editions, including Express, Web, Standard, and Enterprise.

With the addition of SSL support, you can now secure your data both 'in transit' via SSL and 'at rest' using column level encryption. These two features combined with Amazon VPC provide you with a comprehensive approach to protecting your data and isolating your RDS SQL Server instances.
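In an ADO.NET-style connection string, for example, encryption is requested with the Encrypt keyword. The endpoint, database, and credentials below are placeholders, not real values:

```
Server=myinstance.xxxxxxxx.us-east-1.rds.amazonaws.com,1433;
Database=mydb;User Id=admin;Password=...;
Encrypt=True;TrustServerCertificate=False;
```

With TrustServerCertificate=False, the client also validates the server certificate, so the RDS certificate must be trusted by the client machine.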

We are excited to announce that a number of new features and services are now available in the AWS GovCloud (US) region, including Amazon EC2 Cluster Compute instances, Elastic Load Balancing, Auto Scaling, Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS) and Amazon CloudWatch alarms.

AWS GovCloud (US) is an AWS Region designed to allow US government agencies and customers to move more sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. To learn more, please visit our AWS GovCloud (US) home page.

To learn more about how public sector agencies are using AWS and the AWS GovCloud (US) Region, visit the Public Sector website.

We are pleased to announce that AWS Marketplace now supports IAM users and permissions, enabling you to have fine-grained control for how your IAM users buy and run AWS Marketplace software.

With today’s announcement you can control which IAM users can purchase and launch software according to their specific role in your business using familiar IAM tools, such as the AWS Management Console, CLI or IAM APIs. To make this easier, we’ve provided templates for common configurations in the policy generator. Jeff Barr explores this in detail in his latest post on the AWS Blog.

Static website hosting is one of the fastest-growing features in the history of Amazon S3. We’re excited to announce support for web page redirects, a new feature that makes it even easier to manage changes to static websites hosted on Amazon S3.

Web page redirects enable you to change the Uniform Resource Locator (URL) of a web page on your Amazon S3 hosted website (e.g., from www.example.com/oldpage to www.example.com/newpage) without breaking links or bookmarks pointing to the old URL. Users accessing the old URL will automatically be redirected to the new one. Web page redirects also prevent the search ranking of a web page from being impacted when moved to a new URL. Search engines visiting the old URL receive a response that the web page has moved to the new URL. This enables search engines to apply the search ranking information of the old web page in building the ranking for the new one.
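A page redirect is stored as metadata on the object itself. With the modern boto3 SDK, for example, the parameters for marking an object as a redirect look like this (the bucket and key names are hypothetical; WebsiteRedirectLocation corresponds to the x-amz-website-redirect-location header used by S3 website hosting):

```python
def redirect_object_params(bucket, old_key, new_location):
    """Parameters for an S3 PUT that turns old_key into a redirect
    to new_location when served through the website endpoint."""
    return {
        "Bucket": bucket,
        "Key": old_key,
        "WebsiteRedirectLocation": new_location,
    }

params = redirect_object_params("www.example.com", "oldpage", "/newpage")
assert params["WebsiteRedirectLocation"] == "/newpage"
```

You would pass these parameters to the SDK’s put_object call (e.g. `s3_client.put_object(**params)`); visitors requesting the old key through the website endpoint then receive a redirect to the new location.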

Amazon Simple Email Service (Amazon SES) is excited to announce the Mailbox Simulator – an easy way to test generic email responses, including bounces and complaints, without sending messages to actual recipients.

The Amazon SES Mailbox Simulator requires no setup or configuration on your side and is accessible from both the sandbox and production. Previously, you would either see feedback from ISPs only after sending production mail, or have to set up test scenarios within your own email-receiving infrastructure, taking valuable time away from your core offering. Now, you can test your entire sending pipeline without any additional effort.

We are excited to introduce AWS Elastic Beanstalk configuration files, a simple mechanism to customize the software that your application relies on. Elastic Beanstalk offers an easy way to deploy and manage Java, PHP, Python, and .NET applications on AWS. Using the configuration files, you can now easily install packages, run daemons, and configure web servers, without having to create and manage Amazon Machine Images (AMIs).

Powerful

The configuration files allow you to declaratively install packages and libraries, configure software components, and run commands on the Amazon EC2 instances that are part of your Elastic Beanstalk environment. You can also set environment variables across your fleet of EC2 instances, create users and groups, and manage agents and daemons.

Yet simple

The configuration files are YAML-based and are stored in the “.ebextensions” directory of your application version. This allows you to place these files under source control along with your application source files.
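For example, a file such as .ebextensions/setup.config might contain the following (the package and command shown are illustrative, not required by Elastic Beanstalk):

```yaml
# .ebextensions/setup.config -- illustrative example
packages:
  yum:
    git: []                              # install a package on each instance
commands:
  create_log_dir:
    command: mkdir -p /var/log/myapp     # run a shell command during deployment
```

Each instance in the environment applies these directives when your application version is deployed.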

These capabilities are available to newly created Java and Python environments. Existing Java environments will continue to work as expected. However, to leverage the new capabilities, you can simply launch a new environment or follow the step-by-step guide on how to migrate your environment.

Amazon RDS support for Java and Python

You can now easily integrate Elastic Beanstalk Java and Python environments with Amazon RDS. Visit the AWS Elastic Beanstalk Developer Guide to learn more about how to integrate Amazon RDS with Elastic Beanstalk.

We are excited to announce that starting today, the AWS Free Usage Tier will include Amazon RDS instances. With this announcement, customers can gain hands-on experience with Amazon RDS at no cost. Customers eligible for the AWS Free Usage Tier can now use up to 750 hours per month of a Single-AZ t1.micro DB Instance, along with 20 GB of database storage and 10 million I/O requests per month. The free tier applies to Single-AZ deployments of MySQL, Oracle under the “Bring-Your-Own-License” (BYOL) licensing model, and SQL Server Express Edition.

The expanded Free Usage Tier with Amazon RDS t1.micro instances is available today in all regions, except for AWS GovCloud. For more information about the AWS Free Usage Tier, please visit the AWS Free Usage Tier page. To get started using Amazon RDS, visit the Amazon RDS detail page.

We are excited to announce the availability of Micro instances for Amazon RDS for the Oracle database engine. The micro RDS instance is designed for test applications and low traffic applications that use Oracle databases. With the availability of the micro RDS instances, you can now run an Oracle database, starting at just $30 a month ($0.04 an hour) under the License Included model.

These instances are available in all AWS Regions. See the RDS Pricing page for more information on the On Demand and Reserved Instance pricing.

We are excited to announce that Amazon CloudFront, the easy-to-use content delivery network, has added support for the private content feature to the AWS Management Console. You can now configure your distribution to deliver private content without having to use the Amazon CloudFront API. With this addition, along with the recent addition of invalidation support, all Amazon CloudFront features can now be configured using the AWS Management Console’s simple graphical user interface.

Amazon CloudFront’s private content feature provides you greater control over who is able to download your files from Amazon CloudFront. Support for private content in the AWS Management Console includes the ability to configure settings for origin access identities to restrict access to your Amazon S3 buckets. In addition, you can now use the AWS Management Console to add trusted signers; these are AWS accounts that have permission to create signed URLs. With these additions, it is now even easier for you to use Amazon CloudFront to securely deliver important digital assets that you prefer not to make publicly available such as digital downloads, training materials, personalized documents, or media files.

We are excited to announce SQL Server 2012 support for Amazon RDS. Starting today, you can launch new RDS instances running Microsoft SQL Server 2012, in addition to SQL Server 2008 R2. SQL Server 2012 for Amazon RDS is available for multiple editions of SQL Server including Express, Web, Standard, and Enterprise.

With added support for Microsoft SQL Server 2012, Amazon RDS customers can use the new features Microsoft has introduced as part of SQL Server 2012 including improvements to manageability, performance, programmability, and security. A few of these new features are highlighted below:

Contained database – a database that is isolated from other SQL Server databases, including system databases such as the ‘master’ database. This simplifies the task of moving databases from one instance of SQL Server to another by removing dependencies on other SQL Server databases.

Columnstore index – a new type of index for data warehouse type queries. It can greatly reduce I/O and memory utilization on large queries.

Sequence object – an object that acts as a counter similar to SQL Server’s identity column, but it is not restricted to a single table.

User-defined roles – a new role management system in SQL Server 2012 allows users to create custom server roles.

We are excited to announce the availability of Amazon RDS Provisioned IOPS, a new high-performance storage option for the Amazon Relational Database Service (Amazon RDS). Amazon RDS makes it easy to set up, operate, and scale a MySQL, Oracle, or SQL Server database in the cloud – and now enables you to provision up to 10,000 IOPS (input/output operations per second) with 1TB of storage for your new database instances.

Amazon RDS Provisioned IOPS is optimized for I/O-intensive, transactional (OLTP) database workloads. We are delivering this functionality to you in two stages. Starting immediately, when you create new database instances using the AWS Management Console or the Amazon RDS APIs, you can provision from 1,000 IOPS to 10,000 IOPS, with corresponding storage from 100GB to 1TB, for MySQL and Oracle databases. If you are using SQL Server, the maximum you can provision is 7,000 IOPS.
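The endpoints quoted above (1,000 IOPS with 100GB up to 10,000 IOPS with 1TB) suggest a 10:1 ratio of IOPS to storage. A quick sanity check, under that assumption:

```python
def implied_storage_gb(iops, iops_per_gb=10):
    """Storage implied by an IOPS figure, assuming the 10:1
    IOPS-to-GB ratio suggested by the quoted endpoints."""
    return iops / iops_per_gb

assert implied_storage_gb(1000) == 100     # low end: 1,000 IOPS with 100GB
assert implied_storage_gb(10000) == 1000   # high end: 10,000 IOPS with 1TB
```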

In the near future, we plan to provide you with an automated way to migrate existing database instances to Provisioned IOPS storage for the MySQL and Oracle database engines. If you want to migrate an existing RDS database instance to Provisioned IOPS storage immediately, you can export the data from your existing database instance and import into a new database instance equipped with Provisioned IOPS storage.

Amazon RDS Provisioned IOPS can be used with all RDS features like Multi-AZ, Read Replicas, and Amazon Virtual Private Cloud (VPC), and with all RDS-supported database engines (MySQL, Oracle, and SQL Server). Amazon RDS Provisioned IOPS is immediately available for new database instances in the US East (N. Virginia), US West (N. California), and EU West (Ireland) Regions. We plan to launch in our other AWS Regions in the coming months.

Many of our customers value the network management capabilities, availability, and scalability of Amazon VPC. Today we are excited to announce three new features that provide customers with more capabilities for running their applications in Amazon VPC:

You can now use Amazon RDS for SQL Server in Amazon VPC. The same functionality of Amazon RDS including managing backups, automatic failure detection and recovery, software patching, and ease of scaling your compute capacity based on your application demand, is now available in Amazon VPC. Support for Amazon VPC is available for new DB Instances of all SQL Server editions.

Also available today, you can now create IPsec VPN connections to Amazon VPC using static routing configurations. Before today, VPN connections required the use of the Border Gateway Protocol (BGP). We now support both types of connections and are excited to announce that you can now establish connectivity from devices that do not support BGP, including Cisco ASA and Microsoft Windows Server 2008 R2. See the VPC FAQs for a list of VPN devices that we've tested with Amazon VPC.

Finally, you can now configure automatic propagation of routes from your VPN and Direct Connect links to your VPC routing tables. This feature simplifies the effort to create and maintain connectivity to Amazon VPC.

We are excited to announce the availability of Data Pump for Amazon RDS for Oracle. Oracle Data Pump provides fast data movement between Oracle databases, much faster than the original Export and Import utilities. Data Pump makes it easy to import your data into Amazon RDS (or export it out of Amazon RDS) from both on-premises databases and databases running on Amazon EC2. We currently support the network mode of Data Pump, where the job source is an Oracle database.

Oracle Data Pump is available immediately for new RDS for Oracle DB Instances. If you are running 11.2.0.2.v3 or 11.2.0.2.v4 DB Instances, you can upgrade to the 11.2.0.2.v5 (v5) DB Engine to use Data Pump. Additionally, you can upgrade your v3 DB Instance to v4 or v5 to use Oracle Apex and Oracle XML DB. To upgrade your databases, please follow the instructions documented in the RDS User Guide.

We are excited to announce the Reserved Instance Marketplace, an online marketplace that provides AWS customers the flexibility to sell their Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances to other businesses and organizations. Customers can also browse the Reserved Instance Marketplace to find an even wider selection of Reserved Instance term lengths and pricing options sold by other AWS customers.

Reserved Instances allow customers to lower costs by making a low, one-time payment to reserve compute capacity for a specified term and, in turn, receive a significant discount on the hourly charge for that instance. The Reserved Instance Marketplace gives you the flexibility to sell the remainder of your existing Reserved Instances as your needs change, such as moving instances to a new AWS Region, changing to a new instance type, or winding down projects that end before your term expires. Amazon EC2 Instances purchased on the Reserved Instance Marketplace offer the same capacity reservations as Reserved Instances purchased directly from AWS.

You can also now shop the Reserved Instance Marketplace to purchase Reserved Instances outside the standard one-year and three-year term lengths. For example, if you anticipate increased website traffic for a short period of time, or if you have remaining end-of-year budget to spend, you can search for Reserved Instances with shorter durations.

We are excited to announce that Amazon CloudFront, the easy-to-use content delivery network, has added a new edge location in Madrid, Spain. This is our first edge location in Spain, and this new location will speed up the delivery of static, streaming, and dynamic content to end users in and around Spain. Each new edge location helps lower latency and improves performance for your end users.

If you’re already using Amazon CloudFront, you don't need to do anything to your applications as requests are automatically routed to this location when appropriate. The Amazon CloudFront location in Madrid supports all Amazon CloudFront features including support for dynamic content, cookies, low minimum content expiration periods, live streaming to multiple devices using FMS 4.5 or Live Smooth Streaming, streaming media, private content, invalidation, and custom origins.

We are very excited to announce three new features for Amazon CloudFront, the easy-to-use content delivery service that enables you to deliver static, dynamic and streaming content.

First, Amazon CloudFront now supports delivery of dynamic content that is customized or personalized using HTTP cookies. To use this feature, you specify whether you want Amazon CloudFront to forward some or all of your cookies to your custom origin server. Amazon CloudFront then considers the forwarded cookie values when identifying a unique object in its cache. This way, your end users get both the benefit of content that is personalized just for them with a cookie and the performance benefits of Amazon CloudFront.

Second, starting today you have a new option to lower the prices you pay to deliver content out of Amazon CloudFront. By default, Amazon CloudFront minimizes end user latency by delivering content from its entire global network of edge locations. However, because we charge more where our costs are higher, this means that you pay more to deliver your content with low latency to end-users in some locations. While this is the right choice for most Amazon CloudFront customers, we heard that for some use cases cost is a more important concern than latency. So, starting today we’re giving you the option to exclude the more expensive Amazon CloudFront locations from your distributions.

By default, nothing changes: your existing distributions will continue to use the entire network of edge locations. However, if you wish, you can use the AWS Management Console or the Amazon CloudFront API to specify whether you want to deliver content only through a subset of edge locations (we call each subset a price class) where our prices are lower. Note, if you select a price class that does not include all locations, you’ll pay less, but your end users who would normally be served by the locations you exclude will experience higher latency than if your content was being served from all Amazon CloudFront locations.

Finally, we have added three new fields to your Amazon CloudFront access logs:

The result type of each HTTP(S) request (for example, cache hit/miss/error).

The cookie header in the request (if any). Logging of this field is optional.

The value of X-Amz-Cf-Id for that request. This is an encrypted string that uniquely identifies a request to help AWS troubleshoot/debug any issues.

To learn more about Amazon CloudFront and these new features, please visit the detail page for the service.

We are excited to announce that AWS Elastic Beanstalk is now available in the Asia Pacific (Singapore) region. Developers can now leverage the service in the US East (N. Virginia), US West (Oregon), US West (Northern California), Asia Pacific (Tokyo), and EU (Ireland) regions.

AWS Elastic Beanstalk provides an easy way for you to quickly deploy and manage applications in the AWS cloud. You simply upload your application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

Elastic Beanstalk currently supports Java applications running on the familiar Apache Tomcat stack, .NET applications running on IIS 7.5, Python applications and PHP applications running on the Apache HTTP Server stack. Elastic Beanstalk allows you to deploy and manage your applications using a set of tools, including the AWS Management Console, Git deployment and the eb command line interface, the AWS Toolkit for Visual Studio, and the AWS Toolkit for Eclipse. With Elastic Beanstalk, you retain full control over the AWS resources powering your application, such as Amazon EC2 instances, Elastic Load Balancing, and Auto Scaling.

There is no additional charge for Elastic Beanstalk – you pay only for the AWS resources needed to store and run your applications. To get started, visit the Elastic Beanstalk Developer Guide.

We're delighted to announce support for Cross-Origin Resource Sharing (CORS) in Amazon S3. You can now easily build web applications that use JavaScript and HTML5 to interact with resources in Amazon S3, enabling you to implement HTML5 drag and drop uploads to Amazon S3, show upload progress, or update content. Until now, you needed to run a custom proxy server between your web application and Amazon S3 to support these capabilities. A custom proxy server was required because web browsers limit the way web pages loaded from one site (e.g. http://mywebsite.com) can interact with content from another site (e.g. a location in Amazon S3 like assets.mywebsite.com.s3.amazonaws.com). Amazon S3’s support for CORS replaces the need for this custom proxy server by instructing the web browser to selectively enable these cross-site interactions.

You can use the AWS Management Console or the Amazon S3 API to configure your Amazon S3 bucket for CORS. To learn more, please refer to the Amazon S3 Developer Guide.
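As a sketch of what such a configuration contains (the origin, methods, and cache lifetime below are illustrative assumptions, not a copy of any particular bucket's policy), a CORS configuration is a small document listing which origins, methods, and headers the browser may use against the bucket:

```python
import json

# Illustrative CORS rule set for an S3 bucket: allow the page served
# from http://mywebsite.com to GET and PUT objects in the bucket.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["http://mywebsite.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            # How long the browser may cache the preflight response.
            "MaxAgeSeconds": 3000,
        }
    ]
}

# This document is what you would supply via the console or the bucket
# CORS API; shown here only serialized as JSON for inspection.
cors_json = json.dumps(cors_configuration, indent=2)
```

The exact serialization the API expects depends on the API version in use; the rule fields above mirror the concepts described in the Developer Guide.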

We are very happy to announce the availability of Amazon EC2 Cluster Compute Eight Extra Large instances in the US West (Oregon) region.

Cluster Compute Eight Extra Large (cc2.8xlarge) instances provide elastic supercomputing-class performance using Intel Xeon E5 processors and high-bandwidth, low-latency networking. Customers use cc2.8xlarge instances for a variety of scientific, engineering, and business applications, and can now run these applications on the US West Coast in addition to the existing US East (N. Virginia) and EU West (Ireland) regions. Customers can launch instances immediately in a single Availability Zone, including within Amazon Virtual Private Cloud, and can purchase them as On-Demand, Reserved, or Spot Instances. Pricing in the US West (Oregon) region is the same as in the US East (N. Virginia) region.

You can find more information about Cluster Compute Instances on the Amazon EC2 Instance Types page. To get started with cluster computing on Amazon EC2, visit the Amazon EC2 User Guide. We are also providing a new CloudFormation template that launches an 8-node cluster with MIT StarCluster and includes examples for MPI applications.

We are excited to announce Cost Allocation Reports, a new AWS billing feature that enables you to organize and track your AWS costs using tagging. Tags represent your business dimensions and are used in your cost allocation report.

As of today, you can tag resources for the following services: EC2, EBS, S3, RDS, VPC and CloudFormation.
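As an illustration (the tag keys and values here are made-up examples, not required names), tags are simple key/value pairs attached to a resource, and each key later appears as a column in the cost allocation report:

```python
# Illustrative cost-allocation tags for one resource: each key/value
# pair becomes a business dimension in the cost allocation report.
tags = {
    "CostCenter": "12345",
    "Project": "website-redesign",
    "Environment": "production",
}

# Tagging APIs commonly accept tags as a list of {Key, Value} pairs.
tag_list = [{"Key": k, "Value": v} for k, v in sorted(tags.items())]
```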

We are excited to announce Amazon Glacier, a secure, reliable and extremely low cost storage service designed for data archiving and backup. Amazon Glacier is designed for data that is infrequently accessed, yet still important to retain for future reference, and for which retrieval times of several hours are suitable. Examples include digital media archives, financial and healthcare records, raw genomic sequence data, long-term database backups, and data that must be retained for regulatory compliance. With Amazon Glacier, customers can reliably and durably store large or small amounts of data for as little as $0.01 per gigabyte per month. As with all Amazon Web Services, you pay only for what you use, and there are no up-front expenses or long-term commitments.

A few clicks in the AWS Management Console are all it takes to set up Amazon Glacier. Amazon Glacier is currently available in the US-East (N. Virginia), US-West (N. California), US-West (Oregon), EU-West (Ireland), and Asia Pacific (Japan) Regions. To learn more, please visit the Amazon Glacier detail page.

We are excited to announce that AWS Elastic Beanstalk now supports Python applications. Elastic Beanstalk is an easy and fast way to deploy and manage scalable PHP, Java, .NET, and now Python applications on AWS.

We are also introducing new features that make it easier to build Python web applications on Elastic Beanstalk.

First, you can now easily leverage Amazon Relational Database Service (Amazon RDS) database instances with your Elastic Beanstalk applications. Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud, making it a great fit for scalable web applications running on Elastic Beanstalk. To learn more about how to set up and use an Amazon RDS database instance with your application, visit "Using Amazon RDS with Python" in the Developer Guide.

Second, you can customize the Python runtime for Elastic Beanstalk using a set of declarative text files within your application. If your application contains a requirements.txt in its top-level directory, Elastic Beanstalk automatically installs the dependencies using pip. Elastic Beanstalk is also introducing a new configuration mechanism that allows you to install packages from yum, run setup scripts, and set environment variables. To learn more about customizing your Python environment, visit "Customizing and Configuring a Python Container" in the Developer Guide.
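As a sketch of the two customization files described above (the package names, versions, and option values are illustrative assumptions; the exact configuration keys are defined in the Developer Guide), here they are built as plain strings for clarity:

```python
# Contents of requirements.txt at the top level of the application;
# Elastic Beanstalk runs pip against this file at deployment time.
requirements_txt = "\n".join([
    "Django==1.4.1",        # illustrative pinned dependencies
    "MySQL-python==1.2.3",
])

# Contents of an illustrative configuration file using the new
# mechanism: install a yum package and set an environment variable
# (YAML shown here as a plain string).
ebextensions_config = """\
packages:
  yum:
    libmemcached-devel: []
option_settings:
  - option_name: DJANGO_SETTINGS_MODULE
    value: mysite.settings
"""
```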

These four features are available to you at no additional charge for all supported RDS for Oracle licensing models in all Regions where Amazon RDS for Oracle is available.

All of the above features are available immediately for new RDS for Oracle DB Instances launched with the 11.2.0.2.v4 (v4) DB Engine. Support for Amazon VPC is available for new DB Instances of all engine versions. The v4 DB Engine also incorporates Oracle's April and July Patch Set Updates. Customers will be able to upgrade existing v3 DB Instances to v4 within a few weeks.

We are excited to announce that the Amazon DynamoDB Console now enables Item Updating.

The Table Explorer in the DynamoDB Console already allows you to view, add, or delete items in your DynamoDB tables. Starting immediately, you can now also update or make copies of existing items in your tables. To learn more about the Amazon DynamoDB Console, please see our documentation.

Getting started with Amazon DynamoDB is easy with our free tier of service. To learn more, visit the Amazon DynamoDB page.

We are excited to announce that you can now manage your AWS Direct Connect service through the AWS Management Console. Select the AWS Direct Connect option to quickly and easily order new AWS Direct Connect connections or manage existing connections and virtual interfaces. You can also use the AWS Management Console to download router templates customized for your networking equipment.

We are also pleased to announce two new AWS Direct Connect locations at CoreSite 32 Avenue of the Americas in New York and Terremark NAP do Brasil in Sao Paulo. These locations serve the US East (Virginia) and South America (Sao Paulo) Regions.

Click here to get started with AWS Direct Connect. If you are connecting from a remote location to an AWS Direct Connect location, AWS will contact you to discuss your needs further.

We are delighted to announce new features for customers looking to run high performance databases in the cloud with the launch of Amazon EBS Provisioned IOPS and EBS-Optimized instances for Amazon EC2.

Provisioned IOPS volumes are a new EBS volume type designed to deliver predictable, high performance for I/O-intensive workloads, such as database applications, that rely on consistent and fast response times. With EBS Provisioned IOPS, customers can flexibly specify both volume size and volume performance, and Amazon EBS will consistently deliver the desired performance over the lifetime of the volume. Customers can then attach multiple volumes to an Amazon EC2 instance and stripe across them to deliver thousands of IOPS to their application.

To enable Amazon EC2 instances to fully utilize the IOPS provisioned on an EBS volume, we’re also introducing the ability to launch selected Amazon EC2 instance types as EBS-Optimized instances. EBS-Optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Megabits per second and 1,000 Megabits per second depending on the instance type used.
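The striping arithmetic in the paragraphs above is simple to sketch (the volume count and per-volume IOPS figures below are illustrative, not service limits):

```python
def aggregate_iops(volumes):
    """Total IOPS available when striping (e.g. RAID 0) across several
    Provisioned IOPS volumes attached to a single instance."""
    return sum(iops for _size_gb, iops in volumes)

# Illustrative: four volumes, each provisioned at 1,000 IOPS.
volumes = [(100, 1000)] * 4      # (size in GB, provisioned IOPS)
total = aggregate_iops(volumes)  # 4,000 IOPS delivered to the application
```

In practice the instance's dedicated EBS throughput (the EBS-Optimized figures above) also bounds what the application actually sees.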

We are excited to announce the availability of Red Hat Enterprise Linux (RHEL) Reserved Instances, an additional way to run Red Hat Enterprise Linux on Amazon EC2. With RHEL Reserved Instances, you have the option to make a low, one-time payment to reserve compute capacity and receive a significant discount on the hourly charge for that instance. RHEL Reserved Instances are complementary to existing Red Hat Enterprise Linux On-Demand Instances and give businesses even more flexibility to reduce computing costs and enjoy easy access to subscriptions. Now customers who want Reserved Instances with RHEL no longer need to perform additional steps to rebuild On-Demand RHEL AMIs before they can use them on Reserved Instances. Reserved Instances are available for all major versions of RHEL in both 32-bit and 64-bit architectures, in all Regions except AWS GovCloud.

We are excited to announce the availability of High I/O instances for Amazon EC2, a new instance type that provides very high, low-latency disk I/O performance using SSD-based local instance storage. Customers whose applications require low-latency access to tens of thousands of random IOPS can take advantage of the capabilities of this new Amazon EC2 instance. High I/O instances are ideal for high-performance clustered databases, and are especially well suited for NoSQL databases like Cassandra and MongoDB. Customers across a spectrum of use cases, including media streaming, gaming, mobile, and social networking, can now serve demanding storage I/O needs even more efficiently than before, while continuing to take advantage of the low cost and elasticity of Amazon EC2.

High I/O instances are available as a single instance type, High I/O Quadruple Extra Large (hi1.4xlarge), in three Availability Zones in the US East (N. Virginia) region, and in two Availability Zones in the EU West (Ireland) region. We will add support for other regions in the coming months. For customers using Microsoft Windows Server, High I/O instances are only supported with the Microsoft Windows Server AMIs for Cluster Instance Types.

You can learn more about the I/O performance and capabilities of High I/O instances for Amazon EC2 by visiting the Amazon EC2 instance type page.

Today, we are excited to announce the availability of CloudWatch metrics for EC2 status checks. These new CloudWatch metrics let you view graphs, analyze history and set alarms on your EC2 instance’s status check results.

Free of Charge – CloudWatch metrics for EC2 status checks are free of charge with every EC2 instance and are included in EC2 Basic Monitoring. You will have up to fourteen days of status check history for all of your instances.

Automatically Enabled – CloudWatch metrics for EC2 status checks have already been enabled for all of your running instances and are automatically enabled for any new instances you launch.

CloudWatch Alarm Support – CloudWatch metrics for EC2 status checks can be used with CloudWatch alarms to automatically notify you if a status check has detected a problem with your instance (additional charges may apply for CloudWatch alarm usage).
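A sketch of the parameters such an alarm might carry (the instance ID, SNS topic ARN, and threshold choices below are illustrative assumptions):

```python
# Parameters for a CloudWatch alarm that notifies an SNS topic when an
# EC2 status check fails; identifiers here are made-up examples.
alarm = {
    "AlarmName": "web-1-status-check",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-12345678"}],
    "Statistic": "Maximum",
    "Period": 300,               # evaluate over five-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 1,              # the metric is 1 when a check fails, else 0
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
}
```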

Amazon Simple Email Service (Amazon SES) is excited to announce support of Easy DKIM – an easy way to DKIM-sign the email you send via Amazon SES.

DomainKeys Identified Mail (DKIM) allows you to associate your domain reputation to an email message and assert that the message’s contents did not change in transit. Previously, you had to DKIM-sign your messages yourself and exclude certain headers, which could be challenging. With Easy DKIM, Amazon SES takes care of DKIM-signing your email for you, and all you have to do is add some CNAME records to your DNS and enable signing.

You can administer Easy DKIM entirely through the AWS Management Console, and, if you use Amazon Route 53, you can create the CNAME records with a few clicks of your mouse. All of this is also available via three new API actions, so you can choose whether to set Easy DKIM up via the AWS Management Console or programmatically.
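The CNAME records follow a predictable shape; as a sketch (the token values below are made up, and the real tokens come from SES when you enable signing), you publish one record per token:

```python
def dkim_cname_records(domain, tokens):
    """Build the (name, target) CNAME pairs Easy DKIM asks you to
    publish in DNS, one per token returned by SES."""
    return [
        (f"{token}._domainkey.{domain}", f"{token}.dkim.amazonses.com")
        for token in tokens
    ]

# Illustrative tokens; SES supplies the real ones per verified domain.
records = dkim_cname_records("example.com", ["tok1", "tok2", "tok3"])
```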

We’re pleased to announce MFA-protected API access, a new feature of AWS Multi-Factor Authentication (MFA). You can now enforce MFA authentication for AWS service APIs via AWS Identity and Access Management (IAM) policies. This provides an extra layer of security over powerful operations that you designate, such as terminating Amazon EC2 instances or reading sensitive data stored in Amazon S3.

We are excited to announce that AWS Elastic Beanstalk is now available in the US West (Oregon) and the US West (Northern California) regions. Developers can also leverage the service in the US East (N. Virginia), Asia Pacific (Tokyo), and EU (Ireland) regions.

AWS Elastic Beanstalk provides an easy way for you to quickly deploy and manage applications in the AWS cloud. You simply upload your application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

Elastic Beanstalk currently supports Java applications running on the familiar Apache Tomcat stack, .NET applications running on IIS 7.5, and PHP applications running on the Apache HTTP Server stack. Elastic Beanstalk allows you to deploy and manage your applications using a set of tools, including the AWS Management Console, Git deployment and the eb command line interface, the AWS Toolkit for Visual Studio, and the AWS Toolkit for Eclipse. With Elastic Beanstalk, you retain full control over the AWS resources powering your application, such as Amazon EC2 instances, Elastic Load Balancing, and Auto Scaling.

There is no additional charge for Elastic Beanstalk—you pay only for the AWS resources needed to store and run your applications. To get started, visit the AWS Elastic Beanstalk Developer Guide.

We are excited to introduce multiple IP addresses for Amazon EC2 instances in Amazon VPC. Instances in a VPC can be assigned one or more private IP addresses, each of which can be associated with its own Elastic IP address. With this feature you can host multiple websites, including SSL websites and certificates, on a single instance where each site has its own IP address. Private IP addresses and their associated Elastic IP addresses can be moved to other network interfaces or instances, assisting with application portability across instances.

The number of IP addresses that you can assign varies by instance type. Small instances can accommodate up to 8 IP addresses (across 2 elastic network interfaces), whereas High-Memory Quadruple Extra Large and Cluster Compute Eight Extra Large instances can be assigned up to 240 IP addresses (across 8 elastic network interfaces). For more information about IP address and elastic network interface limits, go to Instance Families and Types in the Amazon EC2 User Guide.

With this release we are also lowering the charge for EIP addresses not associated with running instances, from $0.01 per hour to $0.005 per hour on a pro rata basis. This price reduction is applicable to EIP addresses in both Amazon EC2 and Amazon VPC and will be applied to EIP charges incurred since July 1, 2012.

We are excited to welcome eb (pronounced ee-bee) to the family of Elastic Beanstalk command line tools. Eb simplifies development and deployment tasks from the terminal on Linux, Mac OS, and Microsoft Windows. Getting started is as easy as running eb init, eb start, and git aws.push.

Eb guides you through a few questions to configure your Elastic Beanstalk environment. If your source code is managed by Git, eb automatically configures Git to deploy to your Elastic Beanstalk environment.

Using eb, you can quickly launch new environments, deploy and test your application, and update your configuration settings. This allows for quick development and test cycles without leaving your terminal window. Eb also provides information about the status of your environment and progress during deployments.

The existing CLI that replicates the web service API continues to be supported for scripting and automated deployments.

Our most requested feature has been an easier way to process bounces and complaints. Amazon Simple Email Service (Amazon SES) is thrilled to announce feedback notifications via Amazon Simple Notification Service (Amazon SNS).

Now, as an alternative to parsing bounces and complaints passed back to your mailbox, you can set an Amazon SNS topic for bounce or complaint notifications by verified domain or email address and receive them in a simple JSON format. The JSON object will include the message ID of the message that bounced or caused the complaint, along with the email address that bounced or most likely complained.
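As a sketch of consuming such a notification (the field names below follow the format described above, and the message ID and address are made-up examples), a handler just parses the JSON and pulls out the two pieces of information:

```python
import json

# An illustrative bounce notification in the simplified shape described
# above; the identifiers are placeholders, not real SES data.
raw = json.dumps({
    "notificationType": "Bounce",
    "bounce": {"bouncedRecipients": [{"emailAddress": "user@example.com"}]},
    "mail": {"messageId": "example-message-id-0001"},
})

def handle_notification(payload):
    """Extract the message ID and bounced addresses from a notification."""
    note = json.loads(payload)
    if note["notificationType"] != "Bounce":
        return None
    return (
        note["mail"]["messageId"],
        [r["emailAddress"] for r in note["bounce"]["bouncedRecipients"]],
    )

message_id, addresses = handle_notification(raw)
```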

You can set this up via the AWS Management Console or programmatically via API, so you can choose whichever method works best for you.

We are excited to announce the launch of our newest edge location in Sydney, Australia to serve end users of Amazon CloudFront and Amazon Route 53. This is our first edge location in Australia and with this location Amazon CloudFront and Amazon Route 53 now have a total of 33 edge locations worldwide. Each new edge location helps lower latency and improves performance for your end users. We have launched 8 new edge locations in 2012 and we plan to continue to add new edge locations worldwide.

An edge location in Australia has been frequently requested by our customers so we are excited to add this location to our global network. If you’re already using Amazon CloudFront or Amazon Route 53, you don't need to do anything to your applications as requests are automatically routed to this location when appropriate.

We are excited to announce that AWS CloudFormation now supports Amazon DynamoDB as well as Amazon CloudFront dynamic content. With AWS CloudFormation, you can easily provision and update a set of related AWS resources in an orderly and predictable fashion. AWS CloudFormation templates are JSON-formatted text files that can be version controlled alongside your application source.

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. You can now create and update Amazon DynamoDB tables using CloudFormation templates. As the load of your application changes, you can easily adjust your Amazon DynamoDB request capacity by modifying your CloudFormation template and updating your running stacks. To learn more about how to create and manage Amazon DynamoDB tables using CloudFormation, visit the AWS CloudFormation User Guide.
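A minimal template sketch for such a table (the table name, key name, and throughput values are illustrative assumptions; consult the template reference for the authoritative property names), built here as a Python dictionary and serialized to the JSON that CloudFormation consumes:

```python
import json

# Illustrative CloudFormation template declaring one DynamoDB table.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "SessionsTable": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "AttributeDefinitions": [
                    {"AttributeName": "SessionId", "AttributeType": "S"}
                ],
                "KeySchema": [
                    {"AttributeName": "SessionId", "KeyType": "HASH"}
                ],
                # Raising these values and updating the running stack is
                # how you adjust request capacity as load changes.
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 5,
                    "WriteCapacityUnits": 5,
                },
            },
        }
    },
}

template_json = json.dumps(template, indent=2)
```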

Amazon CloudFront is a content delivery service that distributes content with low latency and high data transfer speeds. Using CloudFormation templates, you can now create and manage Amazon CloudFront distributions for your entire web application, including dynamic, static, and streaming content. CloudFormation also allows you to easily update your distributions as you add more content or origins to your application. To learn more about how to create and manage Amazon CloudFront distributions using CloudFormation, visit the AWS CloudFormation User Guide.

We are excited to announce a number of improvements to AWS Support that we believe will deliver more value than ever to our customers. As part of today's announcement of new support features and lower prices, the support plan names have been changed from metallic references to names more in line with their intended customers: Developer (formerly Bronze), Business (Gold), and Enterprise (Platinum). All plans include support for an unlimited number of cases, are available worldwide, have no long-term contracts, and can be cancelled at any time.

Additional improvements include:

An expanded free Support tier

Lower prices on Premium tiers

Launch of the AWS Trusted Advisor Dashboard which provides customers self-service access to proactive alerts that identify opportunities to save money, improve system performance, or close security gaps

Launch of Chat for Business and Enterprise-level Customers

Expansion of Customer Service phone and email availability to anytime hours

Technical Support for Health Checks starting with Amazon EC2

Expansion of Third-Party Software Support to include Support for Databases (MySQL, SQL Server), Disk Management tools (LVM, RAID), and VPN solutions (OpenVPN, RRAS) running on top of AWS Infrastructure Services

Increased Named Contacts from 3 to 5 for Business Customers

It is now easier than ever for customers of all sizes and technical abilities to draw on the deep technical experience of AWS Support engineers to help build and manage applications on top of AWS Infrastructure Services. To learn more about available support plans and pricing, visit the AWS Support page, or sign up.

We’re excited to announce support for running Apache HBase on Amazon Elastic MapReduce, bringing real-time data access to Hadoop in the cloud. HBase is a distributed, column-oriented data store that provides strictly consistent reads and writes, automatic sharding of tables, and efficient storage of large quantities of sparse data. It is built to work seamlessly with Hadoop, sharing its file system and serving as the input and output for MapReduce jobs run in Hadoop. In addition, HBase on EMR provides customers the ability to perform full and incremental backups to Amazon S3 with the option of guaranteed consistency.

For more information, please visit the Running HBase section of the EMR Developer's Guide.

We are excited to introduce AWS Identity and Access Management (IAM) roles for EC2 instances, a new feature that makes it even easier for your applications to securely access AWS service APIs from EC2 instances. Now you can create an IAM role, which has a set of permissions, and launch EC2 instances with the IAM role. You can launch individual EC2 instances or use Auto Scaling or AWS CloudFormation to launch a fleet of instances with IAM roles.

AWS access keys with the specified permissions are automatically made available on EC2 instances that have been launched with an IAM role. IAM roles for EC2 instances manage the muck of securely distributing your AWS access keys to your EC2 instances so that you can focus on what matters most: your application.
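A role boils down to two small policy documents; as a sketch (the bucket name and policy version string are illustrative assumptions), one states who may assume the role and the other states what the role may do:

```python
# Trust policy: lets the EC2 service assume the role on behalf of
# instances launched with it.
trust_policy = {
    "Version": "2012-10-17",   # illustrative policy-language version
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: the set of API actions granted to applications on
# the instance (bucket name is a made-up example).
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::my-app-config/*",
    }],
}
```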

We are excited to announce the availability of Micro instances for Amazon RDS for the MySQL database engine. Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud. Customers have asked us for a lower priced instance type that could satisfy the needs of their less demanding applications. The micro RDS instance allows you to run a fully-featured relational database, starting at just $19 a month ($0.025 an hour). Micro RDS instances also support Multi-AZ deployments and Read Replicas.

The t1.micro RDS instance is designed for lower traffic web applications, test applications and small projects. Micro DB instances provide a small amount of consistent CPU resources, and also allow you to burst CPU capacity when additional cycles are available. The Micro DB instance type is available now in all AWS Regions. See the RDS pricing page for more information on the On Demand and Reserved Instance pricing.

We are excited to announce that you can now create a load balancer in your Amazon Virtual Private Cloud ("VPC") for internal load balancing. With this new feature, you can balance requests between tiers of your application using the private IP addresses of the load balancer, enabling you to use a load balancer without being required to expose it to the internet. For example, if you have a web server front-end that makes requests to application server instances, you can now place a load balancer in front of the application server instances and send the requests from the web server to the internal load balancer. You can still use the Elastic Load Balancing features such as session stickiness, SSL encryption, health checks, and instance registration and de-registration.

We are excited to announce that you can now use Amazon EC2 Spot Instances with Auto Scaling and AWS CloudFormation, making it even easier to use Spot Instances to provide significant savings on your batch computing workloads. We are also releasing a new code tutorial that showcases how you can use Amazon SNS notifications to alert on key changes in Spot pricing. These features make it even easier to use Amazon EC2 Spot Instances.

Auto Scaling: You can now use Spot Instances with Auto Scaling, enabling you to scale the number of Spot Instances you run automatically based on your demand or a schedule you define. For example, you can now easily launch additional Spot Instances as your queue depth increases.

Notifications: We are releasing a new code tutorial that enables you to generate and manage Amazon SNS notifications when there are changes in the state of your Amazon EC2 instances, current Spot Instance requests, and Spot prices within a particular region. By leveraging this new code sample, you can now set up your applications running on Spot Instances to more easily manage potential interruptions.
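The queue-depth pattern described above can be sketched in plain Python (this is an illustration of the scaling decision, not the Auto Scaling API; the per-instance throughput figure and the capacity bounds are assumptions):

```python
# Illustrative sketch: derive a desired number of Spot Instances from the
# current queue depth, capped between assumed minimum and maximum sizes.

def desired_spot_capacity(queue_depth, messages_per_instance=100,
                          min_instances=0, max_instances=20):
    """Return how many Spot Instances to run for the current backlog."""
    needed = -(-queue_depth // messages_per_instance)  # ceiling division
    return max(min_instances, min(max_instances, needed))

print(desired_spot_capacity(0))     # 0  -> no backlog, no instances
print(desired_spot_capacity(250))   # 3  -> 250 messages / 100 per instance
print(desired_spot_capacity(5000))  # 20 -> capped at max_instances
```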

All customers can now access an enhanced Billing CSV Report with the same granularity of detail as is available on the Account Activity page. For Consolidated Billing customers, this report includes line-item detail for both Payer and Linked Accounts. Additionally, you can configure your account preferences to publish your CSV Report to your Amazon S3 bucket for programmatic access.
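Because the report is plain CSV, it lends itself to simple scripted analysis. A minimal sketch, using made-up rows and a simplified set of columns (real detailed billing reports carry many more fields, such as record type and linked account):

```python
import csv
import io
from collections import defaultdict

# Hypothetical rows in the spirit of the billing CSV report; column names
# and values here are illustrative, not the report's exact schema.
sample = """ProductName,UsageType,Cost
Amazon EC2,BoxUsage:m1.small,4.80
Amazon EC2,DataTransfer-Out-Bytes,1.25
Amazon S3,TimedStorage-ByteHrs,0.75
"""

# Total the line-item costs per service.
totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample)):
    totals[row["ProductName"]] += float(row["Cost"])

for product, cost in sorted(totals.items()):
    print(f"{product}: ${cost:.2f}")
```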

AWS is introducing a limited time free trial program to help new Amazon ElastiCache customers get started. Customers eligible for the free trial will be able to run one free small Cache Node for sixty (60) days. The free trial is now available in all Regions.

Amazon ElastiCache improves the performance of your web applications by retrieving data from a fast, in-memory cache instead of relying entirely on disk-based storage. Unlike other caching mechanisms, Amazon ElastiCache is fully-managed so you don’t have to worry about maintaining your own caching infrastructure. In addition, it is Memcached-compatible, so if you have existing Memcached-enabled applications, they should work with ElastiCache without any code changes.

We are excited to announce the availability of Hive 0.8.1 on Amazon Elastic MapReduce. This release introduces a number of important new features to Hive, such as binary and timestamp data types, export/import functionality, a plug-in developer kit, support for per-partition SerDes, and the ability to debug Hive remotely. For more information, please visit the Hive Version Details page in the EMR Developer Guide.

We are excited to announce the availability of Oracle Enterprise Manager 11g Database Control (OEM) for Amazon RDS for Oracle. Starting today, you can enable OEM Database Control for your DB instances with just a few clicks in the AWS Management Console.

In conjunction with OEM, we are excited to announce support for Option Groups. This feature simplifies DB administration by enabling you to save a set of options and their configurations so you can easily apply them to other DB Instances in the future. To enable OEM for your RDS for Oracle DB Instance, simply create an Option Group, add OEM to it, and apply the Option Group to your DB Instance.

OEM Database Control is available at no additional charge for new and existing Amazon RDS for Oracle DB instances and all supported Oracle Editions: Enterprise Edition, Standard Edition, and Standard Edition One.

We are excited to announce that you can now easily export EC2 instances that you previously imported with VM Import back to your on-premises virtualization infrastructure. VM Import enables you to easily import virtual machine images from your existing virtualization environment to Amazon EC2 instances, allowing you to leverage the investments you have made in virtual machines that meet your IT security, configuration management, and compliance requirements. With VM Export, you can now export these EC2 instances back to your on-premises infrastructure, enabling you to seamlessly deploy and move workloads between your IT infrastructure and the AWS cloud.

This feature is available at no additional charge beyond standard usage charges for Amazon S3 and EBS.

We are excited to announce that you can now manage the listeners, SSL certificates, and SSL ciphers for your Elastic Load Balancers from within the AWS Console. This enhancement makes it even easier to get started with Elastic Load Balancing and simpler to maintain a highly available application using Elastic Load Balancing. While this functionality has been available via the API and command line tools, we heard from many customers that it was critical to be able to use the AWS Console to manage these settings on existing load balancers.

With this update, you can add a new listener with a front-end protocol/port and back-end protocol/port after your load balancer has been created. If the listener uses encryption (HTTPS or SSL), you can create or select the SSL certificate and choose the SSL ciphers and protocols to accept. For an existing load balancer, you can now update the certificate directly from the console or alter the SSL protocols and ciphers presented to clients.

In addition to the AWS Console updates, we have also expanded IPv6 support for Elastic Load Balancing to include the US West (Northern California) and US West (Oregon) regions.

Amazon VPC allows you to define a virtual network topology and customize the network configuration to closely resemble a traditional network that you might operate in your own datacenter. Starting today, you can launch Amazon RDS DB Instances inside Amazon VPC, letting you take advantage of the manageability, availability, and scalability benefits of Amazon RDS in your own isolated network.

We are excited to announce that AWS Elastic Beanstalk is now available in the EU (Ireland) region. Developers can now leverage the service in the US East (Northern Virginia) region, the Asia Pacific (Tokyo) region, and the EU (Ireland) region.

AWS Elastic Beanstalk provides an easy way for you to quickly deploy and manage applications in the AWS cloud. You simply upload your application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto scaling, and application health monitoring.

Elastic Beanstalk currently supports Java applications running on the familiar Apache Tomcat stack, .NET applications running on IIS 7.5, and PHP applications running on the Apache HTTP Server stack. Elastic Beanstalk allows you to deploy and manage your applications using a set of tools, including the AWS Management Console, Git deployment and the command line interface, the AWS Toolkit for Visual Studio, and the AWS Toolkit for Eclipse. With Elastic Beanstalk, you retain full control over the AWS resources powering your application, such as Amazon EC2 instances, Elastic Load Balancing, and Auto Scaling.

There is no additional charge for Elastic Beanstalk – you pay only for the AWS resources needed to store and run your applications. To get started, visit the AWS Elastic Beanstalk Developer Guide.

We are excited to announce that Amazon Simple Email Service (Amazon SES) now supports domain verification. Now, instead of needing to verify each email address you want to send from, you can simply verify your entire domain and send from any email address on it.

You can verify any domain in the Amazon SES tab of the AWS Management Console, or you can verify domains via the API. You can use whichever method works best for you. This feature is also integrated with Amazon Route 53 so that you can verify any domain you manage with Amazon Route 53 with a few clicks of your mouse.

To make Amazon SES even easier to use, we have also expanded the number of verified email addresses and domains that you can have in your account from 100 to 1,000.
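For domain verification, SES gives you a verification token that you publish in DNS. A small sketch of constructing that record, assuming the conventional `_amazonses.<domain>` TXT record layout (the token value below is invented for illustration):

```python
# Sketch of the DNS record you publish to prove domain ownership. The token
# comes back from the SES domain verification request; the value shown in
# the example call is made up.

def ses_verification_record(domain, token):
    """Return the (name, type, value) triple for the verification record."""
    return (f"_amazonses.{domain}", "TXT", token)

name, rtype, value = ses_verification_record(
    "example.com", "pmBGN7MjnfhTKUZ06Enqq1PeGUaOkw8lGhcfwefcHU")
print(f'{name}  {rtype}  "{value}"')
```

Once the TXT record is visible in DNS, SES completes verification automatically and the whole domain can be used as a sending address.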

We’re very excited to announce that you can now use Amazon CloudFront to deliver dynamic content. This new capability gives you a simple, cost-effective way to improve the performance, reliability and global reach of your dynamic websites. Amazon CloudFront works seamlessly with dynamic applications running in Amazon EC2 or your origin running outside of AWS without any custom coding or proprietary configurations, making the service simple to deploy and manage. And, there are no additional costs beyond Amazon CloudFront’s existing low prices for data transfer and requests, and no long-term commitments for use.

Amazon CloudFront can now deliver all of your content, including the dynamic portions of your site that change for each end-user. First, you can configure multiple origin servers for your Amazon CloudFront distribution. This allows you the flexibility to keep your content in different origin locations without the need to create multiple distributions or manage multiple domain names on your website. Second, you can include query string parameters to help customize your web pages for each viewer. Third, you can configure multiple cache behaviors for your download distribution based on URL patterns on your website. These cache behaviors give you granular control over how you want Amazon CloudFront to cache different portions of your website. In addition, Amazon CloudFront has implemented several performance optimizations that accelerate the delivery of your dynamic website from the origin to your end users. These performance improvements include maintaining persistent connections with the origin and other network path optimizations to speed up the delivery of dynamic content.
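The cache-behavior idea can be sketched as a small path-pattern dispatch. The patterns, origins, and TTL values below are hypothetical, and the matching is simplified relative to CloudFront's actual rules:

```python
import fnmatch

# Hypothetical cache behaviors, checked in order; the final "*" entry plays
# the role of a default behavior.
behaviors = [
    ("images/*", {"origin": "s3-static",   "min_ttl": 86400, "forward_query": False}),
    ("api/*",    {"origin": "ec2-dynamic", "min_ttl": 0,     "forward_query": True}),
    ("*",        {"origin": "ec2-dynamic", "min_ttl": 3600,  "forward_query": False}),
]

def match_behavior(path):
    """Return the first behavior whose pattern matches the request path."""
    for pattern, behavior in behaviors:
        if fnmatch.fnmatch(path.lstrip("/"), pattern):
            return behavior
    return None

print(match_behavior("/images/logo.png")["origin"])   # s3-static
print(match_behavior("/api/cart")["forward_query"])   # True
print(match_behavior("/index.html")["min_ttl"])       # 3600
```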

Research Finds that Running SAP on AWS Provides Cost Savings of Up to 69%!

As more enterprises seek to understand the cost savings of cloud deployments, Germany-based consulting firm VMS AG leveraged its industry-leading intelligence to develop a detailed cost analysis of running SAP on AWS, drawing on best practices from more than 2,600 SAP environments. The research found that running SAP applications on AWS provides infrastructure savings of up to 69% compared to an SAP Business All-in-One system running on-premises. Download the full TCO White Paper today to learn more.

We are pleased to announce that you can now receive billing alerts that help you monitor the charges on your AWS bill. Starting today, you can set up an alert to be notified automatically via e-mail when estimated charges reach a threshold that you choose. To get started, visit the AWS billing console to enable monitoring for your AWS charges and set your first billing alert on your total AWS charges. Within minutes, you will also be able to set alerts for charges related to any of the individual AWS services or accounts that you use.

Setting up a billing alert is simple. First, provide an e-mail address to subscribe to the alert, select the charges you want to monitor, and enter a threshold. You will then receive a subscription confirmation e-mail from AWS sent to each address that you provided. Click the confirmation link to complete setup. Your alert will then be active, and you will receive a notification as soon as your estimated charges exceed the threshold you chose.

Each alert uses one Amazon CloudWatch alarm to monitor charges and one Amazon SNS topic to send the alert e-mail, charged at standard rates. You can use up to 10 alarms and 1,000 e-mail notifications free each month as part of the AWS Free Tier, and most customers will be able to use billing alerts at no additional charge. To learn more, visit the billing alerts page or view Monitor Your Estimated Charges in the Amazon CloudWatch Developer Guide.

Sincerely,
The Amazon Web Services team

We are delighted to announce two new features for AWS Storage Gateway.

First, you can now configure your AWS Storage Gateway resources programmatically through an API. You can use this API to automate your backup and disaster recovery workflows. You can develop scripts ahead of time so that, when necessary, you can quickly and seamlessly shift your operations to Amazon EC2. This API can be used directly or, if you prefer, you can use the AWS SDKs for .NET, PHP, and Java. Learn more by visiting the AWS SDK page, or referring to the AWS Storage Gateway User Guide.

Second, we're announcing AWS Storage Gateway support for AWS Identity and Access Management (IAM). With IAM, you can easily control which users in your organization have access to your AWS Storage Gateway resources. You can create multiple Users within your AWS Account, and specify which AWS Storage Gateway actions a User or a group of Users can perform on specific gateways. IAM also makes it easy to enable or disable a User's access to your gateways, simplifying and securing management of access credentials. Learn more by visiting the IAM page, or referring to the AWS Storage Gateway User Guide.

Amazon RDS for SQL Server: Deploying and managing databases is one of the most complex, time-consuming, and expensive activities in IT. Amazon RDS removes this complexity and makes it easy to set up, operate, and scale a relational database in the cloud, fully managing database administration tasks such as software installation and patching, database and log back-ups for disaster recovery, and monitoring. Businesses of all sizes have taken advantage of Amazon RDS to offload the operational responsibilities of their MySQL and Oracle databases. With this launch, Amazon RDS brings the same benefits to all SQL Server customers. Below are the key highlights of the service:

Free Usage Tier: If you are new to Amazon RDS, you can get started with Amazon RDS for SQL Server with a Free Usage Tier, which includes 750 hours per month of Amazon RDS micro instances with SQL Server Express Edition, 20GB of database storage and 10 million I/O requests per month for a full year.

Flexible pricing options: Beyond the Free Usage Tier, you can run SQL Server on Amazon RDS using the “License Included” or the “Bring Your Own License” service models, with prices starting at $0.035/hour. Refer to Amazon RDS for SQL Server pricing for more details.

ASP.NET Support for Elastic Beanstalk: AWS Elastic Beanstalk gives you an easy way to quickly deploy and manage your Java, PHP, and, as of today, ASP.NET applications in the AWS cloud. Below are the key highlights of the service:

Compatible and Inexpensive: Because Elastic Beanstalk leverages the familiar IIS 7.5 software stack, existing applications can be deployed with minimal changes to the underlying code. There is no additional charge for Elastic Beanstalk, and you pay only for the AWS resources needed to run your applications.

We are excited to announce support for several new instance sizes as well as support for new Microsoft SQL Server versions that will expand your options and lower the cost of running many of your SQL server database workloads on Amazon EC2. Starting today we are:

Offering the ability to run SQL Server on our Standard Small and Medium instance types, lowering the minimum size of a SQL Server database instance on EC2.

Providing support for SQL Server Web edition, which offers customers with internet-facing workloads functionality similar to SQL Server Standard edition at a significantly lower cost.

In addition, we are excited to announce support for Microsoft SQL Server 2012 on EC2 at no additional cost over existing Microsoft SQL Server offerings. Customers now have immediate access to Amazon-published AMIs for the Express, Web, and Standard editions of SQL Server 2012.

We are excited to announce new Amazon RDS for Oracle service capabilities and Multiple Availability Zone (Multi-AZ) deployment improvements.

New Multi-AZ deployment capability for Amazon RDS for Oracle

Multi-AZ is a deployment option that significantly enhances database availability by synchronously replicating updates made to a primary DB instance to a standby instance located in a separate Availability Zone (AZ) within the same AWS Region. Many Amazon RDS MySQL customers already use the Multi-AZ option to increase the reliability of their production deployments and we are excited to make this feature available to Amazon RDS for Oracle customers.

New Console and API Option to trigger a Failover in Multi-AZ deployments

Many customers have requested the ability to initiate a failover from their primary to their standby DB instance. The typical use cases for this feature include testing the resilience of a new application by forcing a failover. The results can help customers tune DNS caching and connection retry mechanisms. To support this and other use cases, Amazon RDS has added a new console and API option to give Multi-AZ customers the ability to trigger a failover from primary to standby when rebooting a DB instance. This option is available immediately for Amazon RDS for MySQL and Amazon RDS for Oracle.
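A forced failover is useful precisely for exercising client-side retry logic like the DNS caching and connection retry mechanisms mentioned above. A minimal sketch of such a retry loop, with a fake connect function standing in for a real database driver (the endpoint name is hypothetical):

```python
import time

# Reconnect with exponential backoff, calling connect_fn fresh each attempt
# so the endpoint is re-resolved and the new primary is picked up once DNS
# updates after the failover.

def connect_with_retry(connect_fn, endpoint, attempts=5, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return connect_fn(endpoint)
        except ConnectionError:
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise ConnectionError(f"could not reach {endpoint} after {attempts} attempts")

# Fake driver: fails twice (as if the old primary were still cached),
# then succeeds against the new primary.
state = {"calls": 0}
def fake_connect(endpoint):
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("primary unavailable")
    return "connection-to-new-primary"

print(connect_with_retry(fake_connect, "mydb.example.rds.amazonaws.com"))
# -> connection-to-new-primary
```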

New Multiple Character Set Support Option for Amazon RDS for Oracle

Oracle provides the capability to set a character set preference that controls which languages you can represent in your database. Amazon RDS for Oracle now enables customers to specify their preferred character sets when creating new DB instances. Customers can now specify any of thirty character sets, including Shift-JIS, when creating new DB instances.

Cluster Compute Eight Extra Large (cc2.8xlarge) instances provide you with on-demand supercomputing-class performance by giving you access to the latest Intel Xeon processors and high-bandwidth, low-latency networking. Many of our customers in life sciences, oil and gas, manufacturing, space exploration, and business computing have asked for the ability to launch cc2.8xlarge instances into an Amazon Virtual Private Cloud (VPC) for improved network management and isolation for business-critical tasks. At this time, cc2.8xlarge instances can be launched into a VPC in a single Availability Zone in the US East (N. Virginia) region. We will add support for additional Availability Zones and Regions in the coming months.

For more information or to get started with Cluster Compute instances in Amazon VPC, visit the Amazon EC2 User Guide.

We are excited to announce that AWS CloudFormation now supports the creation of Amazon Virtual Private Cloud (VPC) resources. AWS CloudFormation allows you to easily provision, manage and update a collection of related AWS resources. Amazon Virtual Private Cloud (Amazon VPC) lets you create a private, isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define.

Now, you can create new Virtual Private Clouds (VPC), subnets, gateways, network ACLs, routes and route tables using CloudFormation templates. CloudFormation templates are declarative and allow you to easily enumerate what VPC resources, configuration values and interconnections you need to implement a VPC with public subnets, private subnets, and hardware VPN access.

Resource types such as Amazon EC2 instances, security groups, Elastic IP addresses, Elastic Load Balancers, Auto Scaling groups, and Amazon RDS DB instances can already be deployed into any existing Amazon VPC using CloudFormation templates. A CloudFormation template can now fully represent your VPC configuration along with all the resources needed to run your application in the VPC.
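As an illustration of the declarative style, here is a minimal template built in Python; the logical resource names are hypothetical, and a real VPC template would also declare route tables, gateway attachments, and network ACLs:

```python
import json

# Minimal sketch of a CloudFormation template declaring a VPC, one subnet,
# and an internet gateway. Properties are simplified for illustration.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyVPC": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/16"},
        },
        "PublicSubnet": {
            "Type": "AWS::EC2::Subnet",
            "Properties": {
                "VpcId": {"Ref": "MyVPC"},  # interconnection via Ref
                "CidrBlock": "10.0.0.0/24",
            },
        },
        "InternetGateway": {"Type": "AWS::EC2::InternetGateway"},
    },
}

print(json.dumps(template, indent=2))
```

Note how the subnet references the VPC with `Ref` rather than a hard-coded ID; CloudFormation resolves the dependency order at stack creation time.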

We are excited to announce that Amazon DynamoDB is now available in the US West (Oregon), US West (Northern California), and Asia Pacific (Singapore) Regions. These new Regions join the US East (Northern Virginia), EU (Ireland), and Asia Pacific (Tokyo) Regions as locations in which DynamoDB is available.

Amazon DynamoDB is a fully-managed NoSQL database service that provides extremely fast and predictable performance with seamless scalability. With a few clicks in the AWS Management Console, you can easily create a new DynamoDB database table or scale your table’s request capacity to the level that you need without incurring any downtime.

To learn more about DynamoDB, you can attend our webinar on Wednesday, May 16th, from 10am-11am PDT. Register here.

We are excited to announce that AWS Elastic Beanstalk is now available in the Asia Pacific (Tokyo) region. Developers can now leverage the service in both the Asia Pacific (Tokyo) region and the US East (Northern Virginia) region.

AWS Elastic Beanstalk provides an easy way for you to quickly deploy and manage applications in the AWS cloud. You simply upload your application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

Elastic Beanstalk currently supports Java applications running on the familiar Apache Tomcat stack, and PHP applications running on the Apache HTTP Server stack. Elastic Beanstalk allows you to deploy and manage your applications using a set of tools, including the AWS Management Console, Git deployment and the command line interface, and the AWS Toolkit for Eclipse. With Elastic Beanstalk, you retain full control over the AWS resources powering your application, such as Amazon EC2 instances, Elastic Load Balancing, and Auto Scaling.

There is no additional charge for Elastic Beanstalk – you pay only for the AWS resources needed to store and run your applications. To get started, visit the Elastic Beanstalk Developer Guide.

When you find the software you’d like to purchase, you can use AWS Marketplace’s 1-Click deployment to quickly launch pre-configured server images, or deploy with familiar tools like the AWS Console. You’ll be charged for what you use, by the hour or month, and software charges will appear on the same bill as your other AWS services.

We are excited to announce the availability of the Microsoft SharePoint Server on AWS Reference Architecture White Paper.

This white paper discusses general concepts regarding how to run SharePoint on AWS and provides detailed technical guidance on how to configure, deploy, and run a SharePoint Server farm on AWS. It illustrates reference architectures for common SharePoint Server deployment scenarios, such as an intranet SharePoint Server farm or an internet-facing website or service based on SharePoint Server, and discusses their network, security, and deployment configurations in detail so you can run SharePoint Server workloads in the cloud with confidence.

This white paper is targeted at IT infrastructure decision-makers and administrators. After reading it, you should have a good idea of how to set up and deploy the components of a typical SharePoint Server farm on AWS.

We are excited to announce Amazon CloudSearch, a fully-managed search service in the cloud that allows customers to easily integrate fast and highly scalable search functionality into their applications.

Amazon CloudSearch enables search functionality for your website or application without the administrative burdens of operating and scaling a search service. You don’t have to worry about hardware provisioning, data partitioning, setup and configuration, or software patches.

Built for high throughput and low latency, Amazon CloudSearch supports a rich set of features including free text search, faceted search, customizable relevance ranking, configurable search fields, text processing options, and near real-time indexing.

We are excited to announce the availability of Reserved Cache Nodes for Amazon ElastiCache. This allows you to save up to 70% over On Demand prices while adding an in-memory cache to your application architecture in a matter of minutes.

Amazon ElastiCache improves the performance of your web applications by retrieving data from a fast, in-memory cache instead of relying entirely on disk-based storage. Unlike other caching mechanisms, Amazon ElastiCache is fully-managed so you don’t have to worry about maintaining your own caching infrastructure. In addition, it is Memcached-compatible, so if you have existing Memcached-enabled applications, they will work with ElastiCache without any code changes.

Amazon ElastiCache can reduce the load on your databases significantly and improve throughput for read-heavy or compute-intensive workloads including:

Social Networking

Mobile and Social Gaming

E-Commerce Sites

Media Sites

Recommendation Engines

To learn more about Amazon ElastiCache and saving money with Reserved Cache Nodes, please visit the Amazon ElastiCache detail page.

We are excited to announce the launch of Live Smooth Streaming for Amazon CloudFront. Smooth Streaming is a feature of Internet Information Services (IIS) Media Services that enables adaptive streaming of live media to Microsoft Silverlight clients. You can also use this solution to deliver your live stream to Apple’s iOS devices using the Apple HTTP Live Streaming (HLS) format. And you can benefit from the scale and low-latency offered by Amazon CloudFront when delivering your live Smooth Streams.

We've made it simple to get started by creating an AWS CloudFormation template that provisions the AWS resources you need for your live event. You only pay for the AWS resources you consume, and you have full control over the origin server (an Amazon EC2 instance running IIS Media Services) so you can configure additional IIS Live Smooth Streaming functionality for your specific needs.

We are pleased to announce that you can now use Amazon CloudFront with Adobe Flash Media Server 4.5 running on Amazon EC2 to configure live HTTP streaming for both Flash-based and Apple iOS devices. With this solution, you can easily and cost-effectively deliver your live video via AWS to multiple platforms and to a world-wide audience, while paying for the AWS resources you consume.

Our improved live streaming solution uses the lower minimum content expiration period we recently announced for Amazon CloudFront. With this feature, both long-lived live video fragments and the frequently updated live manifest file can be cached at Amazon CloudFront edge locations. This helps you scale your live event without having to scale the origin infrastructure (Amazon EC2 instance running Adobe Flash Media Server 4.5).

We'd also like to invite you to register for our webinar with speakers from both Amazon CloudFront and Adobe on May 4th, 2012, where we will be providing an overview and a demo of our improved live streaming solution.

AWS is introducing a limited time free trial program to help new Amazon RDS customers get started. Customers eligible for the free trial will be able to run one free Single-AZ Small DB Instance for sixty (60) days, along with 20 GB of database storage capacity and 10 million IO requests. You can run a MySQL or Oracle database (using the BYOL model) on your free DB Instance. The free trial is now available in all regions.

We are excited to announce Latency Based Routing (LBR) for Amazon Route 53, AWS's highly reliable and cost-effective DNS service. LBR, one of Amazon Route 53's most requested features, helps you improve your application's performance for a global audience. LBR works by routing your customers to the AWS endpoint (e.g. EC2 instances, Elastic IPs, or ELBs) that provides the fastest experience, based on actual performance measurements of the different AWS regions where your application is running.
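Conceptually, LBR answers each DNS query with the endpoint in the region that measures fastest for that viewer. A toy sketch of that decision (the region latencies and addresses below are made up; real measurements are collected by Route 53, not by your application):

```python
# Hypothetical latency measurements from one viewer to each region where
# the application runs. Addresses use the documentation IP range.
measured_latency_ms = {
    "us-east-1":      {"endpoint": "203.0.113.10", "latency": 95},
    "eu-west-1":      {"endpoint": "203.0.113.20", "latency": 22},
    "ap-southeast-1": {"endpoint": "203.0.113.30", "latency": 180},
}

def fastest_endpoint(latencies):
    """Return the endpoint in the region with the lowest measured latency."""
    region = min(latencies, key=lambda r: latencies[r]["latency"])
    return latencies[region]["endpoint"]

print(fastest_endpoint(measured_latency_ms))  # 203.0.113.20 (eu-west-1)
```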

We are excited to announce that AWS Elastic Beanstalk now supports a PHP runtime and Git deployment. Elastic Beanstalk already makes it easier to quickly deploy and manage Java applications in the AWS cloud. Now, Elastic Beanstalk offers the same functionality for your PHP applications.

AWS Elastic Beanstalk leverages AWS services such as Amazon EC2, Amazon S3, Elastic Load Balancing, Auto Scaling, and Amazon Simple Notification Service, to deliver the same highly reliable, scalable, and cost-effective infrastructure to run your PHP applications. You can easily launch a PHP environment using the AWS Management Console or the Elastic Beanstalk command line interface.

You can now set up your Git repositories to directly deploy changes to your AWS Elastic Beanstalk environments. Git speeds up deployments by only pushing your modified files to AWS Elastic Beanstalk. In seconds, PHP applications get updated on a set of Amazon EC2 instances.

We're excited to announce that you can now use Amazon CloudFront for frequently changing content. Before today, Amazon CloudFront's edge locations cached objects for a minimum of 60 minutes. Effective today, we've removed the sixty-minute minimum expiration period (also known as "time-to-live" or TTL) from Amazon CloudFront. With this change, you can configure a minimum TTL for all objects in your distribution using the Amazon CloudFront API; the minimum TTL value may be as short as 0 seconds. You can then set the TTL for each file by setting the Cache-Control header on the file at your origin. Amazon CloudFront uses Cache-Control headers to determine how frequently each edge location gets an updated version of the file from the origin. Note that our default behavior isn't changing: if no Cache-Control header is set, each edge location will continue to use an expiration period of 24 hours before checking the origin for changes to that file. You can also continue to use Amazon CloudFront's Invalidation feature to expire a file sooner than the TTL set on that file.
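The TTL logic described above can be sketched as follows; the header parsing and the per-distribution minimum are simplified illustrations, not CloudFront's exact behavior:

```python
import re

DEFAULT_TTL = 24 * 60 * 60  # 24-hour default when no cache control header is set

def effective_ttl(cache_control, minimum_ttl=0):
    """Derive a file's TTL from its Cache-Control header (simplified)."""
    if cache_control:
        match = re.search(r"max-age=(\d+)", cache_control)
        if match:
            return max(minimum_ttl, int(match.group(1)))
    return max(minimum_ttl, DEFAULT_TTL)

print(effective_ttl("max-age=0"))    # 0     -> check origin on every request
print(effective_ttl("max-age=30"))   # 30    -> refresh every 30 seconds
print(effective_ttl(None))           # 86400 -> 24-hour default
```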

The automated backup feature of Amazon RDS enables point-in-time recovery for your DB Instance, allowing you to restore your DB Instance to any second during your retention period, up to the last five minutes.

Many customers have requested longer backup retention periods for compliance purposes. We are excited to announce that the maximum retention period for automated backups has been increased from eight days to thirty-five days. This new limit enables you to store more than a month of backups. To modify the retention period for your DB instance, please visit the AWS Management Console.

The AWS Storage Gateway is now available in the South America (Sao Paulo) Region. Starting today, you can securely upload your on-premises application data to this Region for cost effective backup and rapid disaster recovery.

The AWS Storage Gateway service connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between your on-premises IT environment and AWS’s storage infrastructure. The service supports a standard iSCSI interface, enabling you to take advantage of cloud based storage without re-architecting existing applications. It provides low-latency performance by maintaining data on your on-premises storage hardware while asynchronously uploading this data to AWS, where it is encrypted and securely stored in the Amazon Simple Storage Service (Amazon S3). Using the AWS Storage Gateway, you can back up point-in-time snapshots of your on-premises application data to Amazon S3 for future recovery. In the event you need replacement capacity for disaster recovery purposes, or if you want to leverage Amazon EC2’s on-demand compute capacity for additional capacity during peak periods, for new projects, or as a more cost-effective way to run your normal workloads, you can use the AWS Storage Gateway to mirror your on-premises data to Amazon EC2 instances.

If your on-premises applications are running in South America, using the Sao Paulo Region will reduce the time to upload data to AWS. You can also use this Region to meet any requirements you have for keeping data in Brazil. Learn more by visiting the AWS Storage Gateway page or get started by visiting the AWS Management Console.

We are announcing three new features that will make it even easier for you to use Amazon EC2 for your application and development needs.

Customers can now launch 64-bit Amazon Machine Images (AMIs) on m1.small and c1.medium instances. This capability allows you to scale across micro, standard, high CPU and high memory EC2 instances with a single 64-bit AMI.

We are launching the thirteenth Amazon EC2 instance type, m1.medium. m1.medium instances are ideal for many applications that require a reasonable amount of CPU and memory but do not require all the resources of an m1.large instance. This new instance type supports both 32-bit and 64-bit AMIs.

Customers can now log into their Linux instances directly from within the Amazon EC2 management console, without needing to install additional software clients.

To learn more about Amazon EC2 and to try these new features, please visit the Amazon EC2 page.

We are excited to announce that Amazon DynamoDB is now available in the EU (Ireland) Region. The EU (Ireland) region joins the US East (Northern Virginia) and Asia Pacific (Tokyo) regions as the third region in which customers can run the service.

Amazon DynamoDB is a fully managed NoSQL database service that provides extremely fast and predictable performance with seamless scalability. With a few clicks in the AWS Management Console, you can easily create a new DynamoDB database table, or scale your table’s request capacity to the level that you need without incurring any downtime.
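The request capacity you provision is sized in capacity units. As a rough illustration of the arithmetic involved (the 4 KB-per-unit figure and rounding behavior below are assumptions for illustration; consult the DynamoDB documentation for the actual unit definitions), an estimate might look like:

```python
import math

def required_capacity_units(requests_per_second, item_size_kb, unit_size_kb):
    """Estimate provisioned capacity units: each unit serves one request
    per second for items up to unit_size_kb; larger items consume
    proportionally more units (rounded up per item)."""
    units_per_request = math.ceil(item_size_kb / unit_size_kb)
    return requests_per_second * units_per_request

# Example: 500 reads/sec of 3 KB items, assuming 4 KB per read unit.
print(required_capacity_units(500, 3, 4))   # 500
print(required_capacity_units(500, 5, 4))   # 1000 (each 5 KB item needs 2 units)
```

Because you can dial this number up or down without downtime, you can start with an estimate like the one above and adjust as your traffic changes.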

To learn more about DynamoDB, you can attend our webinar on Tuesday, March 20th, from 10am-11am GMT. Register here.

We are excited to announce that AWS Elastic Beanstalk now supports resource permissions through AWS Identity and Access Management (IAM). Whether you are an agency building applications for different customers, a large organization with many development teams, or a single developer, you now have fine-grained control over which IAM users and groups have access to your Elastic Beanstalk resources.

AWS Elastic Beanstalk provides a quick way to deploy and manage applications in the AWS cloud. IAM enables you to manage permissions for multiple users within your AWS account. With Elastic Beanstalk and IAM, you now have fine-grained access control over specific Elastic Beanstalk resources such as applications, application versions, and environments. For example, if you have multiple development teams working on different Elastic Beanstalk applications, you can allow specific developers to create application versions for a specific application and only allow them to deploy to a staging environment. You can then allow a trusted IAM user or group of trusted IAM users to deploy the application versions in the staging environment to a production environment.
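As a sketch of what such a policy might look like (the application name `My-App`, environment name `staging-env`, region, and account ID are all hypothetical placeholders; check the Elastic Beanstalk and IAM documentation for the exact action names and ARN formats), a policy allowing a developer to create application versions and update only the staging environment could resemble:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "elasticbeanstalk:CreateApplicationVersion",
      "Resource": "arn:aws:elasticbeanstalk:us-east-1:123456789012:applicationversion/My-App/*"
    },
    {
      "Effect": "Allow",
      "Action": "elasticbeanstalk:UpdateEnvironment",
      "Resource": "arn:aws:elasticbeanstalk:us-east-1:123456789012:environment/My-App/staging-env"
    }
  ]
}
```

Attaching a policy like this to a developer group grants deployment rights to staging while leaving the production environment reachable only by principals with a broader policy.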

We are excited to announce a reduction in Amazon EC2, Amazon RDS, and Amazon ElastiCache prices. Reserved Instance prices will decrease by up to 37% for Amazon EC2 and by up to 42% for Amazon RDS across all regions. On-Demand prices for Amazon EC2, Amazon RDS, and Amazon ElastiCache will drop by up to 10%. We are also introducing volume discount tiers for Amazon EC2, so customers who purchase a large number of Reserved Instances will benefit from additional discounts. Today’s price drop represents the 19th price drop for AWS, and we are delighted to continue to pass along savings to you as we innovate and drive down our costs.

All of your On-Demand usage will automatically be charged at the new lower rate as of March 1st. New Reserved Instance prices will only apply to Reserved Instance purchases made on or after March 6th. With the new pricing, Reserved Instances will provide savings of up to 71% compared to On-Demand instances, so you may want to take this opportunity to review your current usage and to determine if you would like to purchase additional Light, Medium, or Heavy Utilization Reserved Instances.
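The savings comparison amortizes the Reserved Instance’s upfront fee over the term and adds the discounted hourly rate. A small sketch of that arithmetic follows; the prices used are illustrative placeholders, not actual AWS rates:

```python
def ri_savings(od_hourly, ri_upfront, ri_hourly, term_hours):
    """Fractional savings of a Reserved Instance (upfront fee plus
    discounted hourly rate over the term) versus On-Demand pricing."""
    od_cost = od_hourly * term_hours
    ri_cost = ri_upfront + ri_hourly * term_hours
    return 1 - ri_cost / od_cost

# Illustrative (not actual) prices: $0.32/hr On-Demand versus a 3-year
# Heavy Utilization RI with a $1,000 upfront fee and a $0.052/hr rate.
three_years = 3 * 365 * 24  # 26,280 hours
print(round(ri_savings(0.32, 1000.0, 0.052, three_years), 2))  # 0.72
```

The same function can be used to compare Light and Medium Utilization options by plugging in their upfront fees and expected hours of actual usage.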

We are excited to announce that Amazon DynamoDB is now available in the Asia Pacific (Tokyo) Region. The Tokyo region joins the US East (Northern Virginia) Region as the second location in which customers can run the service.

Amazon DynamoDB is a fully managed NoSQL database service that provides extremely fast and predictable performance with seamless scalability. With a few clicks in the AWS Management Console, you can easily create a new DynamoDB database table, or scale your table’s request capacity to the level that you need without incurring any downtime.

We are excited to announce that Amazon ElastiCache is now available in two additional Regions: US West (Oregon) and South America (Sao Paulo). Starting today, you can use Amazon ElastiCache in these Regions to add an in-memory cache to your application architecture in a matter of minutes.

We are excited to announce Amazon Simple Workflow Service (Amazon SWF), a workflow service for coordinating the various processing steps in your application and managing distributed execution state. Whether automating business processes for finance or insurance applications, building sophisticated data analytics applications, or managing cloud infrastructure services, Amazon SWF reliably coordinates all of the processing steps within an application.

Amazon SWF provides:

Consistent execution state management. You can rely on Amazon SWF to track the execution state of an application across distributed components. The components themselves, including the application flow logic, do not have to deal with maintaining distributed execution state.

Reliable task distribution. Amazon SWF guarantees non-duplicated dispatch of tasks to application components and allows you to control the routing of tasks. Using such features, you can easily implement even complex application flows.

Ease of use. You can easily use Amazon SWF without having to learn new programming languages. You can use a combination of programming languages in implementing the application components and the application flow logic.

Full control over application execution. Amazon SWF manages task execution dependencies, scheduling and concurrency based on the application flow that you define. You have complete freedom in building, deploying and selectively scaling application components.

Amazon SWF also includes the AWS Flow Framework, a programming framework that helps developers easily incorporate asynchronous and event-driven programming into their applications.

To learn more about the service, visit the Amazon SWF page. You can easily get started with Amazon SWF with the free tier of service. You can also run a sample workflow in the AWS Management Console.
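The non-duplicated task dispatch described above can be illustrated with a toy, in-process analogue. This sketch uses a plain Python queue rather than the Amazon SWF API: workers poll a shared task list, and each task is handed to exactly one worker.

```python
import queue
import threading

# Toy illustration of the polling model: a shared task list that
# dispatches each task to exactly one of several polling workers.
task_list = queue.Queue()
results = []
lock = threading.Lock()

def worker(name):
    while True:
        try:
            task = task_list.get(timeout=0.5)  # poll for work
        except queue.Empty:
            return  # no more tasks; worker exits
        with lock:
            results.append((name, task))
        task_list.task_done()

for i in range(5):
    task_list.put(f"task-{i}")

threads = [threading.Thread(target=worker, args=(f"w{n}",)) for n in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every task was dispatched exactly once across the two workers.
assert sorted(t for _, t in results) == [f"task-{i}" for i in range(5)]
```

In Amazon SWF, the service itself plays the role of the shared task list, durably tracking which tasks have been dispatched so that components running anywhere can poll safely.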

We are excited to announce that we’ve reduced the Amazon S3 standard storage prices in all regions, effective February 1st. With this price change, all Amazon S3 standard storage customers will see a reduction in their storage costs. For instance, if you store 50 TB of data on average, you’ll see a 12% reduction in costs, and if you store 500 TB of data on average, you’ll see a 13.5% reduction in costs. The price reduction applies to all standard storage: both existing storage and newly added storage. You can find the new updated prices on the Amazon S3 Detail Page. We are happy to pass along these savings to you as we continue to innovate and drive down our costs.
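The reason larger customers see a bigger percentage reduction is that S3 standard storage is billed on a tiered schedule, so usage fills cheaper tiers as it grows. The sketch below shows how such a schedule is evaluated; the tier boundaries and per-GB rates are illustrative placeholders, not actual S3 prices:

```python
# Illustrative tier schedule (hypothetical rates, not actual S3 prices):
# first 1 TB at $0.125/GB-month, next 49 TB at $0.110, then $0.095.
TIERS = [(1024, 0.125), (49 * 1024, 0.110), (float("inf"), 0.095)]

def monthly_storage_cost(gb, tiers=TIERS):
    """Bill usage against a tiered schedule: each tier is
    (tier_size_gb, price_per_gb), filled in order."""
    cost, remaining = 0.0, gb
    for size, price in tiers:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(monthly_storage_cost(500))   # all usage in the first tier
print(monthly_storage_cost(2048))  # spans the first two tiers
```

With this shape of schedule, cutting the rates of the larger tiers lowers the blended per-GB price more for customers storing more data.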

We are pleased to announce AWS CloudFormation support for Amazon Virtual Private Cloud (Amazon VPC). AWS CloudFormation allows you to easily provision, manage and update a collection of related AWS resources. With Amazon VPC, you can now define a virtual network topology using a CloudFormation template and customize the network configuration to closely resemble a traditional network that you might operate in your own datacenter.

All resource types, such as Amazon EC2 instances, security groups, Elastic IP addresses, Elastic Load Balancers, Auto Scaling groups, and Amazon RDS database instances, can now be deployed into any existing Amazon VPC using CloudFormation templates. The templates allow you to run multi-tier web applications and corporate applications in a private network. With Amazon VPC and CloudFormation, you can easily control which resources you want to expose publicly and which ones should be private.
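A minimal sketch of such a template is shown below. The CIDR blocks and AMI ID are placeholders, and a real template would typically also declare an internet gateway, route tables, and security groups:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "MyVPC": {
      "Type": "AWS::EC2::VPC",
      "Properties": { "CidrBlock": "10.0.0.0/16" }
    },
    "PublicSubnet": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": { "Ref": "MyVPC" },
        "CidrBlock": "10.0.0.0/24"
      }
    },
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-12345678",
        "SubnetId": { "Ref": "PublicSubnet" }
      }
    }
  }
}
```

Because the instance references the subnet and the subnet references the VPC, CloudFormation creates the network topology in the right order and tears it down the same way when the stack is deleted.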

We’re excited to announce the launch of new edge locations in Osaka, Japan and Milan, Italy to serve Amazon CloudFront and Amazon Route 53. With the launches in Osaka and Milan, our first edge location in Italy, Amazon CloudFront and Amazon Route 53 now have a total of 26 edge locations worldwide. Each new edge location helps lower latency and improve performance for your end users. These are the first two edge location launches of 2012, and we plan to continue to add new edge locations worldwide.

We are excited to announce the addition of two new features for Gold and Platinum Premium Support customers - 3rd Party Software Support and AWS Trusted Advisor. With these support enhancements, customers of all sizes and technical abilities can continue drawing on AWS’s broad experience to successfully utilize the products and features provided by the AWS cloud.

3rd Party Software Support: 3rd Party Software Support enables customers to work directly with Premium Support on questions related to the customer’s Amazon Elastic Compute Cloud (EC2) instance operating system as well as the configuration and performance of the most popular 3rd Party Software components on AWS. This support covers most widely used operating systems including Windows, Ubuntu Linux, Red Hat Linux, Novell’s SuSE Linux, and Amazon Linux, as well as systems software including Apache, IIS, Amazon SDKs, FTP, Sendmail, and Postfix. AWS support engineers will now be available to address common problems that derail customers in the setup, configuration, and troubleshooting of their infrastructure components. In addition, through the use of desktop sharing technology, customers will have the option to share their desktops in real time with support engineers, allowing for a more hands-on support experience.

AWS Trusted Advisor: Building on our aggregated history of serving hundreds of thousands of customers, the AWS Trusted Advisor program is a best practices auditing service that monitors a customer’s use of AWS services and automatically recommends configuration changes or new services that may save money, improve system performance, or close security gaps. AWS has built systems to regularly run best practice feature checks against a customer’s AWS environment to flag potential opportunities for improvement. AWS Trusted Advisor launches with eight checks, and additional checks will be deployed throughout the year. A more detailed description of AWS Trusted Advisor can be found in Jeff Barr’s Blog.

We are pleased to announce support for running Amazon RDS DB Instances in Amazon Virtual Private Cloud (Amazon VPC). With Amazon VPC, you can define a virtual network topology and customize the network configuration to closely resemble a traditional network that you might operate in your own datacenter.

You can now take advantage of the manageability, availability, and scalability benefits of Amazon RDS DB Instances in your own isolated network. All of the same Amazon RDS functionality, including managed backups, replication for high availability, automatic failure detection and recovery, software patching, and easy scaling of compute capacity and storage based on your application demand, is now available in Amazon VPC.

We are excited to announce the public beta launch of the AWS Storage Gateway, a new service connecting an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and AWS’s storage infrastructure. The service enables you to securely upload data to the AWS cloud for cost effective backup and rapid disaster recovery. The AWS Storage Gateway supports industry-standard storage protocols that work with your existing applications. It provides low-latency performance by maintaining data on your on-premises storage hardware while asynchronously uploading this data to AWS, where it is encrypted and securely stored in the Amazon Simple Storage Service (Amazon S3).

Using the AWS Storage Gateway, you can back up point-in-time snapshots of your on-premises application data to Amazon S3 for future recovery. In the event you need replacement capacity for disaster recovery purposes, or if you want to leverage Amazon EC2’s on-demand compute capacity for additional capacity during peak periods, for new projects, or as a more cost-effective way to run your normal workloads, you can use the AWS Storage Gateway to mirror your on-premises data to Amazon EC2 instances. Learn more by visiting the AWS Storage Gateway page or check out the Introducing Storage Gateway Video.

We are excited to announce that AWS Identity and Access Management (IAM) has extended support for identity federation, now enabling federated users to access the AWS Management Console. Identity federation enables you to use your existing corporate identities to grant secure and direct access to the AWS Management Console without creating a new AWS identity for those users. You can enable your users to sign in to your corporate network and then access the AWS Management Console without having to sign in to AWS, providing them single sign-on access to AWS. This complements IAM’s existing identity federation capability that enables you to provide federated users secure and direct access to AWS service APIs.
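The console half of this flow works by redirecting the user to a federation sign-in URL. The sketch below builds such a URL; it assumes you have already exchanged temporary credentials (for example, from an IAM GetFederationToken call) for a sign-in token via the federation endpoint’s getSigninToken action, a step omitted here, and the token and issuer values shown are placeholders:

```python
from urllib.parse import urlencode

FEDERATION_ENDPOINT = "https://signin.aws.amazon.com/federation"

def console_login_url(signin_token, issuer, destination):
    """Build the single sign-on URL that drops a federated user into
    the AWS Management Console. The signin token must come from an
    earlier getSigninToken request made with temporary credentials."""
    query = urlencode({
        "Action": "login",
        "Issuer": issuer,           # where to send users when their session expires
        "Destination": destination, # console page to land on
        "SigninToken": signin_token,
    })
    return f"{FEDERATION_ENDPOINT}?{query}"

url = console_login_url("EXAMPLE-TOKEN",
                        "https://portal.example.com",
                        "https://console.aws.amazon.com/")
```

Your identity provider would redirect the authenticated user’s browser to this URL, at which point the console session is established without a separate AWS sign-in.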

We are excited to announce the immediate availability of Amazon DynamoDB, a fully managed NoSQL database service that provides extremely fast and predictable performance with seamless scalability. With a few clicks in the AWS Management Console, you can easily create a new DynamoDB database table, or scale your table’s request capacity to the level that you need without incurring any downtime.

We are excited to announce that starting today, the AWS Free Usage Tier will now include Amazon EC2 instances running Microsoft Windows Server. Customers eligible for the AWS Free Usage Tier can now use up to 750 hours per month of t1.micro instances running Microsoft Windows Server for free.

With this announcement, customers familiar with Windows Server can gain hands-on experience with AWS at no cost. Customers can select from a range of pre-configured Amazon Machine Images with Microsoft Windows Server 2008 R2. Once running, customers can connect via Microsoft Remote Desktop Client to begin building, migrating, testing, and deploying their web applications on AWS in minutes. The expanded Free Usage Tier with Microsoft Windows Server t1.micro instances is available today in all regions, except for AWS GovCloud.

We are excited to announce four new AWS Direct Connect locations at CoreSite One Wilshire in Los Angeles, TelecityGroup Docklands in London, Equinix in Singapore, and Equinix in Tokyo. These locations serve the US West (Northern California), EU (Ireland), Asia Pacific (Singapore) and Asia Pacific (Tokyo) Regions.

AWS Direct Connect offers several benefits for customers:

It lowers bandwidth costs out of AWS, which is valuable for applications that have bulk data transfer requirements.

It offers more consistent network performance than Internet-based connections for applications that require real-time data feeds.

It provides an alternative means to connect to the AWS cloud for customers who may have security or compliance policies that prevent VPN connectivity to the cloud.

To get started with AWS Direct Connect, click here to complete a sign-up form indicating your connection requirements (primarily the number of connections you require and connection bandwidth). AWS will contact you, usually within one business day.