If you are subscribed to the blog through the RSS feed, you should not need to make any changes.

The new blog has a fresh, clean design, and it is responsive to boot! As is the case with every part of AWS, we will continue to enhance it in the days and weeks after today's launch. All 90 of my posts from 2014 are already available on the new blog; the rest will be there shortly.

This new location is the first step in a bigger project that will ultimately make all of the AWS blogs available within a common URL structure. We have a lot of news and other information to share and are taking steps now to make sure that you can easily find and enjoy as much of it as possible.

Last month I urged you to download your secret access key(s) for your AWS (root) account in advance of a planned change in our access model.

We have implemented the change and you can no longer retrieve existing secret access keys for the root account. If you lose your secret access key, you must generate a new access key (an access key ID and a secret access key).

Now is a great time to make a commitment to follow our best practices and create an IAM user that has access keys, instead of relying on root access keys. Using IAM will allow you to set up fine-grained control over access to your AWS resources.

Amazon DynamoDB is a fast, fully-managed NoSQL database. You can easily create tables, provision the desired amount of read and write capacity, and then store as much information as you'd like. Each item in a DynamoDB table consists of one or more key/value pairs, indexed by a hash key or a combination of a hash key and a range key (read more about the DynamoDB data model), with additional indexing options in the form of Local and Global Secondary Indexes.
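As a quick illustration of the model, here is a hedged sketch (using the current boto3 SDK) that creates a table keyed by a hash key and a range key; the table and attribute names are hypothetical:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical table: sensor readings keyed by sensor ID (hash key)
# and reading timestamp (range key).
dynamodb.create_table(
    TableName="SensorData",
    AttributeDefinitions=[
        {"AttributeName": "SensorId", "AttributeType": "S"},
        {"AttributeName": "Timestamp", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "SensorId", "KeyType": "HASH"},    # hash key
        {"AttributeName": "Timestamp", "KeyType": "RANGE"},  # range key
    ],
    # Provision the desired amount of read and write capacity.
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 5},
)
```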

NoSQL databases like DynamoDB are designed for scalability and consistent, high performance. This stands in marked contrast to the traditional relational database model, where these attributes are not always as important as flexibility and generality.

Today we are making DynamoDB even more flexible with additional options for filtering query results and for updating existing items. Let's take a look!

Query Filtering

DynamoDB's Query function retrieves items using a primary key or an index key from a Local or Global Secondary Index. Each query can use Boolean comparison operators to control which items will be returned.

With today's release, we are extending this model with support for query filtering on non-key attributes. You can now include a QueryFilter as part of a call to the Query function. The filter is applied after the key-based retrieval and before the results are returned to you. Filtering in this manner can reduce the amount of data returned to your application while also simplifying and streamlining your code.

The QueryFilter that you pass to the Query API must include one or more conditions. Each condition references an attribute name and includes one or more attribute values, along with a comparison operator. In addition to the usual Boolean comparison operators, you can also use CONTAINS, NOT_CONTAINS, and BEGINS_WITH for string matching, BETWEEN for range checking, and IN to check for membership in a set.

In addition to the QueryFilter, you can also supply a ConditionalOperator. This logical operator (either AND or OR) is used to connect each of the elements in the QueryFilter.
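Here is a hedged boto3 sketch of a Query that combines a key condition with a QueryFilter and a ConditionalOperator; the table, key, and attribute names are hypothetical:

```python
import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.query(
    TableName="SensorData",                      # hypothetical table
    KeyConditions={                              # key-based retrieval
        "SensorId": {
            "AttributeValueList": [{"S": "sensor-17"}],
            "ComparisonOperator": "EQ",
        }
    },
    # Applied after the key-based retrieval, before results are returned.
    QueryFilter={
        "Temperature": {
            "AttributeValueList": [{"N": "50"}, {"N": "100"}],
            "ComparisonOperator": "BETWEEN",
        },
        "Status": {
            "AttributeValueList": [{"S": "error"}],
            "ComparisonOperator": "NE",
        },
    },
    ConditionalOperator="AND",  # connect the filter conditions
)

for item in response["Items"]:
    print(item)
```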

Updating Items

DynamoDB's PutItem, UpdateItem, and DeleteItem functions can optionally perform a conditional update of an item. This feature allows two or more processes to make concurrent updates to a particular item in a controlled fashion. Let's say you are using a DynamoDB table to track real-time data that is flowing in from a sensor network. You can use the UpdateItem function to implement a highly reliable counting system in a scalable way. Your code would do the following (a sketch follows the list):

1. Call GetItem to read the item.
2. Extract the count field from the item and increment it by 1.
3. Call UpdateItem with the current and the new values of the count field.
4. Return to step 1 if UpdateItem indicates that the current value is incorrect.
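Here is a hedged boto3 sketch of that loop, with the condition expressed as a ConditionExpression; the table, key, and attribute names are hypothetical:

```python
import boto3

dynamodb = boto3.client("dynamodb")

def increment_count(table, key):
    while True:
        # Step 1: read the item.
        item = dynamodb.get_item(TableName=table, Key=key)["Item"]
        # Step 2: extract the count field and increment it by 1.
        current = int(item["count"]["N"])
        try:
            # Step 3: write the new value, but only if the stored value
            # still matches what we read.
            dynamodb.update_item(
                TableName=table,
                Key=key,
                UpdateExpression="SET #c = :new",
                ConditionExpression="#c = :old",
                ExpressionAttributeNames={"#c": "count"},
                ExpressionAttributeValues={
                    ":new": {"N": str(current + 1)},
                    ":old": {"N": str(current)},
                },
            )
            return current + 1
        except dynamodb.exceptions.ConditionalCheckFailedException:
            # Step 4: another writer updated the item first; retry.
            continue
```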

With today's release, you have additional options to specify the condition that must hold for the item to be replaced (put), updated, or deleted. You can also specify multiple conditions, connected by AND or OR.

For example, you could subtract an amount from a bank balance (indicating a withdrawal), only if the balance is greater than or equal to the amount of the withdrawal.
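A hedged sketch of that withdrawal, again with hypothetical table and attribute names:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Subtract the withdrawal amount only if the balance can cover it;
# otherwise the call raises ConditionalCheckFailedException.
dynamodb.update_item(
    TableName="Accounts",                        # hypothetical table
    Key={"AccountId": {"S": "1234"}},
    UpdateExpression="SET balance = balance - :amount",
    ConditionExpression="balance >= :amount",
    ExpressionAttributeValues={":amount": {"N": "100"}},
)
```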

The AWS SDKs for .NET, Ruby, Java, and JavaScript have been updated and you can start using these powerful new features today. We plan to update the Python and PHP SDKs in the next couple of days.

You can use Amazon ElastiCache to implement an in-memory storage layer between your application code and your database using the Memcached or Redis engines. Adding an in-memory storage layer can often lead to dramatic speed improvements for database accesses.

Regardless of which engine you choose, ElastiCache makes it easy to deploy, operate, and scale a cloud-based in-memory storage layer.
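As a concrete illustration of that pattern, here is a minimal cache-aside sketch using the redis-py client; the endpoint, key scheme, and database helper are all hypothetical:

```python
import json

import redis

# An ElastiCache for Redis cluster exposes a standard Redis endpoint.
cache = redis.StrictRedis(
    host="my-cluster.abc123.0001.use1.cache.amazonaws.com",  # hypothetical
    port=6379,
)

def query_database(user_id):
    # Hypothetical stand-in for a real database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = "user:{0}".format(user_id)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)       # cache hit: skip the database
    record = query_database(user_id)    # cache miss: hit the database
    cache.setex(key, 300, json.dumps(record))  # cache for five minutes
    return record
```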

Backup and Restore

Today we are making ElastiCache for Redis even more powerful with the addition of backup and restore functionality. You can now create a snapshot of your entire Redis cluster as it exists at a specific point in time. You can schedule automatic, recurring daily snapshots and you can initiate a manual snapshot at any time.

The snapshots are stored in Amazon S3 with high durability, and can be used for warm starts, backups, and archiving. Restoring a cache snapshot creates a new ElastiCache for Redis cluster and populates it with the data from the snapshot. You can even create multiple ElastiCache for Redis clusters from a single snapshot. This can be handy for performance-sensitive applications that work best when running from a warm (well-populated) cache.

Console Tour

Let's take a tour of the new Backup and Restore functions in the AWS Management Console. You can establish a recurring backup schedule when you create a new cache cluster or you can schedule backups for an existing cluster. You can choose the desired retention period (1 to 35 days), and you can also set the daily time range for automatic backups:
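As a rough API-level equivalent of that console step, here is a hedged boto3 sketch that enables daily automatic snapshots on an existing cluster; the cluster ID and window are hypothetical:

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.modify_cache_cluster(
    CacheClusterId="my-redis-cluster",  # hypothetical cluster ID
    SnapshotRetentionLimit=7,           # keep snapshots for 7 days (1-35)
    SnapshotWindow="05:00-06:00",       # daily backup window (UTC)
    ApplyImmediately=True,
)
```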

You can also create a snapshot manually, like this:

You can see your automatic and manual snapshots:

And you can restore a snapshot to create a new cluster:
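The same snapshot, list, and restore steps can also be scripted; a hedged boto3 sketch, with hypothetical names throughout:

```python
import boto3

elasticache = boto3.client("elasticache")

# Create a manual snapshot of a running cluster.
elasticache.create_snapshot(
    CacheClusterId="my-redis-cluster",
    SnapshotName="pre-deploy-snapshot",
)

# List the automatic and manual snapshots for the cluster.
snapshots = elasticache.describe_snapshots(CacheClusterId="my-redis-cluster")
for snap in snapshots["Snapshots"]:
    print(snap["SnapshotName"], snap["SnapshotStatus"])

# Restore the snapshot into a brand new cluster.
elasticache.create_cache_cluster(
    CacheClusterId="my-restored-cluster",
    SnapshotName="pre-deploy-snapshot",
    Engine="redis",
    CacheNodeType="cache.m1.small",   # hypothetical node type
    NumCacheNodes=1,
)
```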

Backup and Restore Notes

This new feature is now available in all AWS Regions where ElastiCache is currently available. You can create a snapshot from any ElastiCache instance type, with the exception of the t1.micro.

ElastiCache provides storage space for one snapshot free of charge for each active ElastiCache for Redis cluster. Storage for additional snapshots is priced at $0.085 / GB per month.

For better performance, we recommend taking snapshots on a read replica instead of on the master cache node. The snapshot process uses the Redis BGSAVE operation and is subject to its strengths and limitations. To be more specific, this operation causes the Redis process to fork into parent and child processes. The parent process continues to handle requests while the child process writes the snapshot to disk. The forking process can increase memory pressure within the cache node, leading to swapping and reduced performance. Moving this work to a read replica will minimize any possible impact on the performance of your application.

The Amazon Relational Database Service (RDS) takes care of almost all of the day-to-day grunt work that would otherwise consume a lot of system administrator and DBA time. You don't have to worry about hardware provisioning, operating system or database installation or patching, backups, monitoring, or failover. Instead, you can invest in your application and in your data.

Multiple Engine Versions

RDS supports multiple versions of the MySQL, Oracle, SQL Server, and PostgreSQL database engines. Here is the current set of supported MySQL versions:

You can simply select the desired version and create an RDS DB Instance in minutes.

Upgrade Support

Today we are enhancing Amazon RDS with the ability to upgrade your MySQL DB Instances from version 5.5 to the latest release in the 5.6 series that's available on RDS.

To upgrade your existing instances, create a new Read Replica, upgrade it to MySQL 5.6, and once it has caught up to your existing master, promote it to be the new master. You can initiate and monitor each of these steps from the AWS Management Console. Refer to the Upgrading from MySQL 5.5 to MySQL 5.6 section of the Amazon RDS User Guide to learn more.
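Here is a hedged boto3 sketch of those steps; the instance identifiers and the exact 5.6 version string are hypothetical, and each step should only proceed once the previous one has completed:

```python
import boto3

rds = boto3.client("rds")

# 1. Create a read replica of the existing MySQL 5.5 master.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica",
    SourceDBInstanceIdentifier="mydb-master",
)

# 2. Once the replica is available, upgrade it to MySQL 5.6.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb-replica",
    EngineVersion="5.6.19",           # hypothetical 5.6 version string
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=True,
)

# 3. After the replica has caught up with the master, promote it.
rds.promote_read_replica(DBInstanceIdentifier="mydb-replica")
```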

For MySQL 5.5 instances that you create after today's release, simply select the Modify option corresponding to the DB Instance to upgrade it to the latest version of MySQL 5.6. If you are using RDS Read Replicas, upgrade them before you upgrade the master.

The InnoDB storage engine now supports binary log access and online schema changes, allowing ALTER TABLE operations to proceed in parallel with other operations on a table. The engine now does a better job of reporting optimizer statistics, with the goal of improving and stabilizing query performance. An enhanced locking mechanism reduces system contention, and multi-threaded purging increases the efficiency of purge operations that span more than one table.

Planning for Upgrades

Regardless of the upgrade method that is applicable to your RDS DB Instances, you need to make sure that your application is compatible with version 5.6 of MySQL. Read the documentation on Upgrading an Instance to learn more about this.

AWS Elastic Beanstalk makes it easy for you to deploy and manage applications in the AWS cloud. After you upload your application, Elastic Beanstalk will provision, monitor, and scale capacity (Amazon EC2 instances), while also load balancing incoming requests across all of the healthy instances.

Docker automates the deployment of applications in the form of lightweight, portable, self-sufficient containers that can run in a variety of environments. Containers can be populated from pre-built Docker images or from a simple recipe known as a Dockerfile.

Docker's container-based model is very flexible. You can, for example, build and test a container locally and then upload it to the AWS Cloud for deployment and scalability. Docker's automated deployment model ensures that the runtime environment for your application is always properly installed and configured, regardless of where you decide to host the application.

Today we are enhancing Elastic Beanstalk with the ability to launch applications contained in Docker images or described in Dockerfiles. You can think of Docker as an exciting and powerful new runtime environment for Elastic Beanstalk, joining the existing Node.js, PHP, Python, .NET, Java, and Ruby environments.

Beanstalk, Meet Docker

With today's launch, you now have the ability to build and test your applications on your local desktop and then deploy them to the AWS Cloud via Elastic Beanstalk.

You can use any desired version of the programming language, web server, and application server. You can configure them as you see fit, and you can install extra packages and libraries as needed.

You can launch existing public and private Docker images. Each image contains a snapshot of your application and its dependencies, and can be created locally using a few simple Docker commands.

To use an image with Elastic Beanstalk, you will create a file called Dockerrun.aws.json. This file specifies the image to be used and can also set up a port to be exposed and volumes to be mapped into the container from the host environment. If you are using a private Docker image, you will also need to create a .dockercfg file, store it in Amazon S3, and reference it from the Authentication section of Dockerrun.aws.json.
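Here is a hypothetical Dockerrun.aws.json to illustrate the shape of the file; the image name, bucket, port, and volume mapping are all placeholders:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "example/my-php-app",
    "Update": "true"
  },
  "Authentication": {
    "Bucket": "my-config-bucket",
    "Key": ".dockercfg"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/var/app/data",
      "ContainerDirectory": "/data"
    }
  ]
}
```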

You can also use a Dockerfile. The Docker commands contained in such a file will be processed and executed as part of the Auto Scaling configuration established by Elastic Beanstalk. In other words, each freshly created EC2 instance used to host an Elastic Beanstalk application will be configured as directed by your Dockerfile.
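For instance, a minimal hypothetical Dockerfile for a PHP application might look like this; the base image and paths are illustrative:

```dockerfile
# Start from a stock PHP + Apache image.
FROM php:5.6-apache

# Copy the application source into the web server's document root.
COPY src/ /var/www/html/

# Expose the port that Elastic Beanstalk maps to the host.
EXPOSE 80
```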

Regardless of which option you choose, you always upload a single file to Elastic Beanstalk. This upload can be:

1. A plain Dockerfile.
2. A plain Dockerrun.aws.json file.
3. A Zip file that contains either a Dockerfile or a Dockerrun.aws.json file, along with other application assets.

The third option can be useful for applications that require a number of "moving parts" to be present on the instance. If you are using a Dockerfile, you could also choose to fetch these parts using shell commands embedded in the file.

Docker in Action

Let's create a simple PHP application using Elastic Beanstalk for Docker! The first step is the same for every Elastic Beanstalk application -- I simply fill in the name and the description:

Then I choose Docker as the Predefined Configuration. This application will not need to scale very high, so a single instance environment is fine:

The moving parts are in a single directory, with src and web subdirectories and a Dockerfile at the root:

I zipped them up into a single file like this (note that I had to explicitly mention the .ebextensions directory):

Then I upload the file to Elastic Beanstalk:

With the file uploaded, I can now create an Elastic Beanstalk environment. This will be my testing environment; later I could create a separate environment for production. Elastic Beanstalk lets me configure each environment independently. I can also choose to run distinct versions of my application code in each environment:

The PHP application makes use of a MySQL database, so I will ask Elastic Beanstalk to create it for me (I'll configure it in a step or two):

Now I choose my instance type. I can also specify an EC2 key pair; this will allow me to connect to the application's EC2 instances via SSH and can be useful for debugging:

I can also tag my Elastic Beanstalk application and the AWS resources that it creates (this is a new feature that was launched earlier this week):

Now I can configure my RDS instance. The user name and the password will be made available to the EC2 instance in the form of environment variables.
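Shown here as a Python sketch for brevity (the demo application itself is PHP), the connection settings arrive in environment variables that follow the Elastic Beanstalk RDS naming convention:

```python
import os

# Read the RDS connection settings that Elastic Beanstalk injects
# into the environment of each instance.
db_config = {
    "host": os.environ["RDS_HOSTNAME"],
    "port": int(os.environ["RDS_PORT"]),
    "database": os.environ["RDS_DB_NAME"],
    "user": os.environ["RDS_USERNAME"],
    "password": os.environ["RDS_PASSWORD"],
}
```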

Have you shopped at AWS Marketplace lately? The selection keeps on growing and you can easily find, buy, and start using many different types of Software Infrastructure, Developer Tools, and Business Software:

You don't need to worry about procuring a server, installing and configuring an operating system, or installing the actual software that you set out to use in the first place. AWS Marketplace is a short, direct path that will have you up and running in minutes.

Because you can launch fully installed and configured software in minutes, you can easily try out one or more products until you find the one that is the best match for your requirements.

Security Product Trial

For the next thirty days (April 15, 2014 to May 15, 2014) you can evaluate six leading security products via AWS Marketplace. For each eligible product that you use for at least 120 hours, you will receive $175 in AWS Credits, so you'll only pay for the AWS infrastructure that you consume during the evaluation. At the conclusion of the evaluation period, you will automatically be transitioned into a paid subscription. See the Terms and Conditions for additional information.

We have made some important changes to the EC2 pricing and instance type pages. We are introducing the concept of previous generations of EC2 instances.

Amazon EC2 has been around since the summer of 2006. We started with a single instance type (the venerable and still-popular m1.small) and have added many more over the years. We have broadened our selection by adding specialized instance families such as Compute-Optimized, Memory-Optimized, and Cluster, and by adding a wide variety of sizes within each family.

As newer and more powerful processors have become available, we have added to the lineup in order to provide you with access to the best performance at a given price point. The newest instances are a better fit for new applications and we want to make this clear on our website. To this end, we have moved some of the instance families to a new Previous Generations page. Instances in these families are still available as On-Demand, Reserved, and Spot Instances. Here's a list of some previous generations and their contemporary equivalents:

Instance Family       Previous Generation   Current Generation
General Purpose       M1                    M3
Compute-Optimized     C1, CC2               C3
Memory-Optimized      M2, CR1               R3
Storage-Optimized     HI1                   I2
GPU                   CG1                   G2

While we have no current plans to deprecate any of the instances listed above, we do recommend that you choose the latest generation of instances for new applications.