We have made some important changes to the EC2 pricing and instance type pages. We are introducing the concept of previous generations of EC2 instances.

Amazon EC2 has been around since the summer of 2006. We started with a single instance (the venerable and still-popular m1.small) and have added many over the years. We have broadened our selection by adding specialized instance families such as CPU-Optimized, Memory-Optimized, and Cluster and by adding a wide variety of sizes within each family.

As newer and more powerful processors have become available, we have added to the lineup in order to provide you with access to the best performance at a given price point. The newest instances are a better fit for new applications, and we want to make this clear on our website. To this end, we have moved some of the instance families to a new Previous Generations page. Instances in these families are still available as On-Demand Instances, Reserved Instances, and Spot Instances. Here's a list of some previous generations and their contemporary equivalents:

| Instance Family | Previous Generation | Current Generation |
|---|---|---|
| General Purpose | M1 | M3 |
| Compute-Optimized | C1 & CC2 | C3 |
| Memory-Optimized | M2, CR1 | R3 |
| Storage-Optimized | HI1 | I2 |
| GPU | CG1 | G2 |

While we have no current plans to deprecate any of the instances listed above, we do recommend that you choose the latest generation of instances for new applications.
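If you want to flag previous-generation instances in your own tooling, the mapping above boils down to a small lookup table. Here's a sketch; the helper name and structure are ours, not an AWS API:

```python
# Maps a previous-generation EC2 instance family to its
# current-generation equivalent, per the table above.
UPGRADE_PATH = {
    "m1": "m3",    # General Purpose
    "c1": "c3",    # Compute-Optimized
    "cc2": "c3",
    "m2": "r3",    # Memory-Optimized
    "cr1": "r3",
    "hi1": "i2",   # Storage-Optimized
    "cg1": "g2",   # GPU
}

def suggest_upgrade(instance_type: str) -> str:
    """Return the current-generation family for a type like 'm1.small'."""
    family = instance_type.split(".")[0].lower()
    return UPGRADE_PATH.get(family, family)  # already current if not listed
```

For example, `suggest_upgrade("m1.small")` points you at the M3 family.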

I talked about the upcoming memory-optimized EC2 instance type (R3) last week and provided you with configuration and pricing information so that you could start thinking about how to put them to use in your environment. I am happy to report that the R3 instances are now available for use in the following AWS Regions:

US East (Northern Virginia)

US West (Northern California)

US West (Oregon)

EU (Ireland)

Asia Pacific (Tokyo)

Asia Pacific (Sydney)

Asia Pacific (Singapore)

Memory-Optimized

R3 instances are recommended for applications that require high memory performance at the best price point per GiB of RAM. The instances include the following features:

The R3 instances are available in five sizes, as follows (prices are in US East (Northern Virginia); see the EC2 pricing page for full information):

| Instance Name | vCPU Count | RAM | Instance Storage (SSD) | Price/Hour |
|---|---|---|---|---|
| r3.large | 2 | 15 GiB | 1 x 32 GB | $0.175 |
| r3.xlarge | 4 | 30.5 GiB | 1 x 80 GB | $0.350 |
| r3.2xlarge | 8 | 61 GiB | 1 x 160 GB | $0.700 |
| r3.4xlarge | 16 | 122 GiB | 1 x 320 GB | $1.400 |
| r3.8xlarge | 32 | 244 GiB | 2 x 320 GB | $2.800 |

You can launch the r3.xlarge, r3.2xlarge, and r3.4xlarge instances in EBS-Optimized form, with additional, dedicated I/O capacity for EBS volumes. The r3.8xlarge instance features 10 Gigabit networking.
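Because the R3 lineup doubles RAM and price at each step, choosing a size programmatically is a simple scan. Here's a sketch (the function name is ours) that picks the least expensive R3 size meeting a RAM requirement, using the figures from the table above:

```python
# R3 sizes and US East (Northern Virginia) On-Demand prices from the table above.
R3_SIZES = [
    # (name, vCPUs, RAM in GiB, price per hour in USD)
    ("r3.large",   2,  15.0,  0.175),
    ("r3.xlarge",  4,  30.5,  0.350),
    ("r3.2xlarge", 8,  61.0,  0.700),
    ("r3.4xlarge", 16, 122.0, 1.400),
    ("r3.8xlarge", 32, 244.0, 2.800),
]

def smallest_r3_for(ram_gib: float) -> str:
    """Return the least expensive R3 size with at least ram_gib of RAM.

    The list is ordered by price, which also orders it by RAM."""
    for name, _vcpus, ram, _price in R3_SIZES:
        if ram >= ram_gib:
            return name
    raise ValueError(f"No R3 size offers {ram_gib} GiB of RAM")
```

A 100 GiB working set, for instance, lands on the r3.4xlarge.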

Customer Reaction

Several AWS customers have been working with the R3 instances in preparation for today's launch:

Netflix is the world’s leading Internet television network with over 44 million members in 41 countries enjoying more than one billion hours of TV shows and movies per month, including original series. Coburn Watson, Manager of Performance Engineering at Netflix told us:

We run many memory-hungry applications to support the volume of content our customers access. These applications require instances with high memory footprint and high memory bandwidth. By delivering high memory capacity, and high performance, R3 instances address these needs at a low cost and we are already planning to utilize them to support many of our applications and services.

MongoDB is one of the most popular NoSQL options on AWS. It uses aggressive memory caching for its data file management and benefits from access to copious amounts of memory. Matt Asay, VP of Marketing and Business Development at MongoDB, told us:

R3 instances provide a broad spectrum of compute and memory scaling options for our customers to realize full memory caching potential of MongoDB. Our customers can start with a smaller instance for testing and early development, and scale to larger R3 instances as they move to production.

Metamarkets enables buyers and sellers of digital advertising to understand and visualize large quantities of data in real-time. Patrick McBride, Head of Technical Operations for Metamarkets, told us:

A key part of our analytics platform is Druid, our open source datastore that’s built to analyze tens of billions of records in under a second. For certain query types, R3 instances help us reduce Druid’s median query time by nearly 50%. That means a better experience for our clients, who rely on us to deliver insights right when they need them.

Partner Support

Many APN (AWS Partner Network) Technology Members are working to make their offerings available on the R3 instances. Here's a sampling:

Many organizations need to move transactional data from one location to another. As organizations make the cloud a central part of their overall IT architecture, this need grows in tandem with their size, scope, and complexity. Use cases range from replicating a master transactional database to a readable secondary database, to moving applications from on-premises to the cloud, to maintaining a redundant copy in another data center. Transactions that are generated and stored within a database run by one application may need to be copied over so that they can be processed, analyzed, and aggregated in a central location.

In many cases, one part of the organization has moved to a cloud-based data storage model that's powered by the Amazon Relational Database Service (RDS). With support for the four most popular relational databases (Oracle, MySQL, SQL Server, and PostgreSQL), RDS has been adopted by organizations of all shapes and sizes. Users of Amazon RDS love the fact that it takes care of many important yet tedious deployment, maintenance, and backup tasks that are traditionally part and parcel of an on-premises database.

Oracle GoldenGate

Today we are giving RDS Oracle customers the ability to use Oracle GoldenGate with Amazon RDS. Your RDS Oracle Database Instances can be used as the source or the target of GoldenGate-powered replication operations.

Oracle GoldenGate can collect, replicate, and manage transactional data between a pair of Oracle databases. These databases can be hosted on-premises or in the AWS cloud. If both databases are in the AWS cloud, they can be in the same Region or in different Regions. The cloud-based databases can be RDS DB Instances or Amazon EC2 Instances that are running a supported version of Oracle Database. In other words, you have a lot of flexibility! Here are four example scenarios:

On-premises database to RDS DB Instance.

RDS DB Instance to RDS DB Instance.

EC2-hosted database to RDS DB Instance.

Cross-region replication from one RDS DB Instance to another RDS DB Instance.

You can also use GoldenGate for Amazon RDS to upgrade to a new major version of Oracle.

Getting Started

As you can see from the scenarios listed above, you will need to run the GoldenGate Hub on an EC2 instance. This instance must have sufficient processing power, storage, and RAM to handle the anticipated transaction volume. Supplemental logging must be enabled for the source database, and it must retain archived redo logs. The source and target databases need user accounts for the GoldenGate user, along with a very specific set of privileges.

After everything has been configured, you will use the Extract and Replicat utilities provided by Oracle GoldenGate.
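To give you a feel for what this looks like, here is a rough sketch of GoldenGate Extract and Replicat parameter files. Every name, host, and schema below is a placeholder of ours, not a value from this post; consult the Oracle GoldenGate documentation for the authoritative syntax and options:

```
-- Extract parameter file (source side); all names are placeholders
EXTRACT eabc
USERID oggadm1, PASSWORD ********
RMTHOST hub.example.com, MGRPORT 7809
RMTTRAIL /ogg/dirdat/ab
TABLE myschema.*;

-- Replicat parameter file (target side)
REPLICAT rabc
USERID oggadm1, PASSWORD ********
ASSUMETARGETDEFS
MAP myschema.*, TARGET myschema.*;
```

The Extract process captures changes from the source's redo logs and ships them to a trail file on the hub; the Replicat process applies that trail to the target.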

At last week's AWS Summit in San Francisco, Senior VP Andy Jassy announced the forthcoming R3 instance type (watch Andy's presentation), and presented a map to illustrate the choices:

I'd like to provide you with some additional technical and pricing information so that you can start thinking about how you will put this powerful new instance to work.

Soon to be available in five instance sizes, this instance type is recommended for applications that require high memory performance at the best price point per GiB of RAM. The R3 instances include the following features:

The R3 instances will be available in five sizes, as follows (prices are in US East (Northern Virginia); see the EC2 pricing page for full information):

| Instance Name | vCPU Count | RAM | Instance Storage (SSD) | Price/Hour |
|---|---|---|---|---|
| r3.large | 2 | 15 GiB | 1 x 32 GB | $0.175 |
| r3.xlarge | 4 | 30.5 GiB | 1 x 80 GB | $0.350 |
| r3.2xlarge | 8 | 61 GiB | 1 x 160 GB | $0.700 |
| r3.4xlarge | 16 | 122 GiB | 1 x 320 GB | $1.400 |
| r3.8xlarge | 32 | 244 GiB | 2 x 320 GB | $2.800 |

You will be able to launch the r3.xlarge, r3.2xlarge, and r3.4xlarge instances in EBS-Optimized form, with additional, dedicated I/O capacity for EBS volumes. The r3.8xlarge instance features 10 Gigabit networking.

Stay tuned to this blog, or follow me on Twitter and you'll be among the first to know when you can start launching some R3 instances.

We release new versions of the Amazon Linux AMI every six months after a public testing phase that includes one or more Release Candidates. The Release Candidates are announced in the EC2 forum and are available to all EC2 users.

Launch Time

Today marks the release of the 2014.03 Amazon Linux AMI, which is available in both PV and HVM mode, with both EBS-backed and Instance Store-backed AMIs. The Amazon Linux AMI is supported on all EC2 instance types.

You can launch this new version of the AMI in the usual ways. You can also upgrade existing EC2 instances by running yum update and rebooting your instance.

Updates & New Features

The Amazon Linux AMI was designed to provide a stable, secure, and high performance execution environment for applications running on EC2.

Going Going Gone

This release marks the third anniversary of the launch of the Amazon Linux AMI. We are now starting to make plans to deprecate and ultimately remove some of the older packages. Check the release notes for more information about our plans in this area.

Choosing Alternatives

As you can see from the list of updates and new features, the Amazon Linux AMI incorporates multiple versions of a number of important packages. The Alternatives package is part of the AMI and can be used to switch between versions. Under the covers, this command uses symbolic links to effect a system-wide change that will persist across reboots.

To show you how to do this, I installed four separate versions of GCC on my instance. I can switch between them using the command alternatives --config gcc. The command lists the available versions and allows me to make a change by selecting the desired version:

The new version of the Amazon Linux AMI is available today in all of the public AWS Regions.

It is always fun to write about price reductions. I enjoy knowing that our customers will find AWS to be an even better value over time as we work on their behalf to make AWS more and more cost-effective. If you've been reading this blog for a while, you know that we reduce prices on our services from time to time; today’s announcement is the 42nd price reduction since 2008.

We're more than happy to continue this tradition with our latest price reduction.

Amazon EC2 Price Reductions

We are reducing prices for On-Demand instances as shown below. Note that these changes will automatically be applied to your AWS bill with no additional action required on your part.

| Instance Type | Linux / Unix Price Reduction | Microsoft Windows Price Reduction |
|---|---|---|
| M1, M2, C1 | 10-40% | 7-35% |
| C3 | 30% | 19% |
| M3 | 38% | 24-27% |

We are reducing the prices for Reserved Instances as well, for all new purchases. With today’s announcement, you can save up to 45% on a 1 year RI and 60% on a 3 year RI relative to the On-Demand price. Here are the details:

| Instance Type | Linux / Unix (1 Year) | Linux / Unix (3 Year) | Microsoft Windows (1 Year) | Microsoft Windows (3 Year) |
|---|---|---|---|---|
| M1, M2, C1 | 10%-40% | 10%-40% | Up to 23% | Up to 20% |
| C3 | 30% | 30% | Up to 16% | Up to 13% |
| M3 | 30% | 30% | Up to 18% | Up to 15% |

Also keep in mind that as you scale your footprint of EC2 Reserved Instances, you will benefit from the Reserved Instance volume discount tiers, increasing your overall discount over On-Demand by up to 68%.
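To see what an RI discount means in practice, it helps to amortize the upfront fee over the term and compare the effective hourly rate with the On-Demand price. The sketch below uses made-up placeholder numbers, not actual RI pricing; see the EC2 pricing page for real figures:

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def effective_hourly(upfront: float, hourly: float, term_years: int) -> float:
    """Amortize an RI's upfront fee over its term and add the hourly rate."""
    return upfront / (term_years * HOURS_PER_YEAR) + hourly

# Placeholder numbers for illustration only -- not actual AWS prices.
on_demand = 0.100                                # $/hour, On-Demand
one_year_ri = effective_hourly(300.0, 0.020, 1)  # upfront $300 + $0.020/hour
savings = 1 - one_year_ri / on_demand            # fraction saved vs. On-Demand
```

With these illustrative numbers the RI works out to roughly $0.054/hour, a savings of about 46% over On-Demand, which is how figures like "up to 45%" are derived.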

Amazon S3 Price Reductions

We are reducing prices for Standard and Reduced Redundancy Storage by an average of 51%. The price reductions in the individual S3 pricing tiers range from 36% to 65%, as follows:

| Tier | New S3 Price / GB / Month | Price Reduction |
|---|---|---|
| 0-1 TB | $0.0300 | 65% |
| 1-50 TB | $0.0295 | 61% |
| 50-500 TB | $0.0290 | 52% |
| 500-1000 TB | $0.0285 | 48% |
| 1000-5000 TB | $0.0280 | 45% |
| 5000 TB or More | $0.0275 | 36% |

These prices are for the US Standard Region; consult the S3 Price Reduction page for more information on pricing in the other AWS Regions.
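Note that the tiers are graduated: each rate applies only to the gigabytes that fall within that tier, not to your entire footprint. A quick sketch of the arithmetic, using the table above and assuming the 1 TB = 1024 GB convention:

```python
# S3 Standard tiers from the table above (US Standard Region), assuming
# 1 TB = 1024 GB. Each tuple is (tier size in GB, price per GB-month).
TIERS = [
    (1 * 1024,    0.0300),  # first 1 TB
    (49 * 1024,   0.0295),  # next 49 TB (up to 50 TB)
    (450 * 1024,  0.0290),  # next 450 TB (up to 500 TB)
    (500 * 1024,  0.0285),  # next 500 TB (up to 1000 TB)
    (4000 * 1024, 0.0280),  # next 4000 TB (up to 5000 TB)
]
OVERFLOW_PRICE = 0.0275     # 5000 TB or more

def monthly_cost(gb: float) -> float:
    """Graduated monthly storage cost across the S3 pricing tiers."""
    total = 0.0
    for size, price in TIERS:
        in_tier = min(gb, size)
        total += in_tier * price
        gb -= in_tier
        if gb <= 0:
            return total
    return total + gb * OVERFLOW_PRICE
```

Storing 2 TB, for example, costs 1024 GB at $0.0300 plus 1024 GB at $0.0295, or about $60.93 per month.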

Amazon RDS Price Reductions

We are reducing prices for Amazon RDS DB Instances by an average of 28%. There's more information on the RDS Price Reduction page, including pricing for Reserved Instances and Multi-AZ deployments of Amazon RDS.

Amazon ElastiCache Price Reductions

We are reducing prices for Amazon ElastiCache cache nodes by an average of 34%. Check out the ElastiCache Price Reduction page for more information.

Amazon Elastic MapReduce Price Reductions

We are reducing prices for Elastic MapReduce by 27% to 61%. Note that this is in addition to the EC2 price reductions described above. Here are the details:

| Instance Type | EMR Price Before Change | New EMR Price | Reduction |
|---|---|---|---|
| m1.small | $0.015 | $0.011 | 27% |
| m1.medium | $0.03 | $0.022 | 27% |
| m1.large | $0.06 | $0.044 | 27% |
| m1.xlarge | $0.12 | $0.088 | 27% |
| cc2.8xlarge | $0.50 | $0.270 | 46% |
| cg1.4xlarge | $0.42 | $0.270 | 36% |
| m2.xlarge | $0.09 | $0.062 | 32% |
| m2.2xlarge | $0.21 | $0.123 | 41% |
| m2.4xlarge | $0.42 | $0.246 | 41% |
| hs1.8xlarge | $0.69 | $0.270 | 61% |
| hi1.4xlarge | $0.47 | $0.270 | 43% |

With this price reduction, you can now run a large Hadoop cluster using the hs1.8xlarge instance for less than $1000 per Terabyte per year (this includes both the EC2 and the Elastic MapReduce costs).
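To sanity-check the $1,000-per-terabyte figure: the hs1.8xlarge offers 48 TB of local storage, and assuming an On-Demand EC2 price of roughly $4.60 per hour (an assumption on our part; verify against the EC2 pricing page), the arithmetic looks like this:

```python
HOURS_PER_YEAR = 365 * 24  # 8760

ec2_hourly = 4.600   # assumed hs1.8xlarge On-Demand price -- check the pricing page
emr_hourly = 0.270   # new EMR price for hs1.8xlarge, from the table above
storage_tb = 48      # hs1.8xlarge local storage, in terabytes

# Run the instance around the clock for a year, divide by its storage.
cost_per_tb_year = (ec2_hourly + emr_hourly) * HOURS_PER_YEAR / storage_tb
```

Under these assumptions the total comes to roughly $889 per terabyte per year, comfortably under the $1,000 mark.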

We've often talked about the benefits that AWS's scale and focus creates for our customers. Our ability to lower prices again now is an example of this principle at work.

It might be useful for you to remember that an added advantage of using AWS services such as Amazon S3 and Amazon EC2 over using your own on-premises solution is that with AWS, the price reductions that we regularly roll out apply not only to any new storage that you might add but also to the existing data that you have already stored in AWS. With no action on your part, your cost to store existing data goes down over time.

Once again, all of these price reductions go into effect on April 1, 2014 and will be applied automatically.

The Amazon Virtual Private Cloud (VPC) gives you the power to create a logically isolated section of the AWS Cloud, which you can think of as a virtual network. You can launch AWS resources, including Amazon EC2 instances, within the network, and you have full control over the virtual networking environment, including the IP address range and the subnet model. You also have full control over network routing, both within the VPC (using route tables) and between networks (using network gateways).

VPC Peering

Today we are making the VPC model even more flexible! You now have the ability to create a VPC peering connection between VPCs in the same AWS Region. Once established, EC2 instances in the peered VPCs can communicate with each other across the peering connection using their private IP addresses, just as if they were within the same network.

You can create a peering connection between two of your own VPCs, or with a VPC in another AWS account. A VPC can have one-to-one peering connections with up to 50 other VPCs in the same Region.

VPC peering enables a number of interesting use cases; let's take a look at a couple of them.

Within a single organization, you can set up peering relationships between VPCs that are run by different departments. One VPC can encompass resources that are shared across an entire organization, with additional, per-department VPCs for resources that are peculiar to the department. Here's a very simple example:

After you set up the peering connections and add entries to the routing tables (to direct packets out of one VPC and into another), the EC2 instances in the Accounting VPC can access the Shared Resources VPC, as can the instances in the Engineering VPC. However, the Accounting instances cannot access the Engineering instances, or vice versa. Peering connections are not transitive; you would need to set up a peering connection between Engineering and Accounting in order to establish connectivity. Think about extending this model with an Operations VPC that is peered with all of the other VPCs in your organization.

As I mentioned earlier, you can also establish VPC peering between a pair of VPCs that are owned by different accounts. Suppose your organization is a member of an industry consortium or a party to a joint venture. You can use VPC peering to share common resources between members of the consortium or other joint venture, all within AWS and with full control of the networking topology:

As was the case in the previous scenario, each participant in the consortium will be able to see and access the shared resources, but not those of the other participants. We've documented a number of common peering scenarios in our VPC Peering Guide.

Peering Details

I'm going to show you just how easy it is to create a VPC peering connection in a minute. Before I do that, I'd like to review the rules that govern the use of this very powerful new feature.

You can connect any two VPCs that are in the same AWS Region, regardless of ownership, as long as both parties agree. We plan to extend this feature to support cross-Region peering in the future. Connections are requested by sending an invitation from one VPC to the other. The invitation must be accepted in order to establish the connection. Needless to say, you should only accept invitations from VPCs that you know. You are free to ignore unwanted or incorrect invitations; they'll expire before too long.

The VPCs to be peered must have non-overlapping CIDR blocks. This is to ensure that all of the private IP addresses are unique, allowing direct access (as allowed by the peering and routing tables) without the need for any form of network address translation.
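Checking the non-overlap requirement ahead of time is straightforward with Python's standard library; here's a quick sketch (the function name is ours):

```python
import ipaddress

def can_peer(cidr_a: str, cidr_b: str) -> bool:
    """VPC peering requires the two VPCs' CIDR blocks not to overlap."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)
```

For example, 10.0.0.0/16 and 10.1.0.0/16 can be peered, but 10.0.0.0/16 and 10.0.1.0/24 cannot, because the /24 falls inside the /16.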

As you can see from the scenarios that I described above, VPC peering connections do not generate transitive trust. Just because A is peered with B and B is peered with C, it doesn't mean that A is peered with C.
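The non-transitive rule is easy to model: reachability over peering is a direct-edge test, not a graph traversal. A sketch with made-up VPC names:

```python
# Model peering connections as an undirected edge set. A VPC can reach
# another only via a direct peering connection -- routes never chain
# through an intermediate VPC.
peerings = {frozenset({"A", "B"}), frozenset({"B", "C"})}

def can_communicate(vpc1: str, vpc2: str) -> bool:
    """True only when the two VPCs share a direct peering connection."""
    return frozenset({vpc1, vpc2}) in peerings
```

Here A can reach B, and B can reach C, but A cannot reach C until you create an A-C peering connection of its own.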

The connections are implemented within the VPC fabric; this avoids single points of failure and bandwidth bottlenecks.

There is no charge for setting up or running a VPC peering connection. Data transferred across peering connections is charged at $0.01/GB for send and receive, regardless of the Availability Zones involved.

VPC Peering Example

I used the AWS Management Console to set up a VPC peering connection between two of my VPCs, which were named corporate-vpc and branch-east-vpc. Here are the IDs and the CIDRs:

Before I go any further, I should note that these features are available in the "Preview" version of the VPC console. In addition to support for the creation and management of VPC peering connections, the new console includes a multitude of tagging features to simplify and enhance your VPC management operations.

I clicked on Peering Connections in the VPC Dashboard, selected corporate-vpc, and then used the Create VPC Peering Connection button to invite branch-east-vpc to peer:

The invite appeared in the list of connections. I selected it and clicked Accept:

The peering connection was created and became visible immediately:

Then I created an entry in the route table of each VPC. As you can see, the console provided me with a helpful popup when it was time for me to choose the Target for the route:

Peer Now

The new VPC peering feature is available now and you can start using it today. I am very interested in seeing how this feature is put to use. Leave me a comment and let me know what you think!

We launched Amazon S3 on March 14, 2006 with a press release and a simple blog post. We knew that the developer community was interested in and hungry for powerful, scalable, and useful web services and we were eager to see how they would respond.

S3 and the Amazon Values

Almost every company has a mission statement of some kind. At Amazon.com, we are guided by our Leadership Principles. We use these principles as part of the interviewing process, and revisit them during our annual reviews. I thought back to the launch of S3 and the long string of additional features that we have added to it since then, and tried to match them up to some of the leadership principles.

Customer Obsession - Before we wrote a line of code, we talked to lots of potential customers so that we could have a good understanding of the features that they would like to have in an Internet-scale storage service. We talked to individuals and groups within the company, and to outside developers. The listening process didn't stop when S3 launched. We talk to customers every day and we do our best to listen, understand, and to respond.

Invent and Simplify - True innovation calls for a lot of difficult decisions. The innovator must decide what the product is, and what it is not. We were breaking new ground when we were designing and building S3, and had to figure out how to handle identity, authentication, billing, security, and hundreds of other issues before we could launch the product.

Are Right, A Lot - The first time I heard about S3 internally, I was told that we were building "Malloc for the Internet." As a long-time C programmer, I knew exactly what this meant. Malloc is a very basic C library function — it allocates the requested amount of memory and returns a pointer to it. It is a simple building block for more complex forms of memory management. Equating S3 to Malloc was a key insight, and one that served as a guiding principle when making those early (and crucial) design decisions. Moving forward, we continually remind ourselves that the first "S" in S3 stands for "Simple."

Think Big - Because S3 was designed for the Internet, we had to make sure that the architecture and the implementation contained no intrinsic limits. Today, with trillions of objects stored and an access rate of over one million requests per second, we continue to look to the future, with a well-tuned model that allows us to forecast, plan for, and accommodate the never-ending inflow of new data. Like Malloc, S3 is a dependable architectural component. Amazon EC2, Elastic MapReduce, Elastic Block Store, Amazon Glacier, CloudTrail, Redshift, the Relational Database Service, and other services all make use of S3 for object-style storage.

Earn Trust of Others - It is kind of fun to be in a crowded elevator at a tech conference. The conference attendees talk about S3 very casually, and take its scale, durability, and cost-effectiveness pretty much for granted. I often hear them say things like "Just throw it into S3 and stop worrying about it." S3 has become, as we envisioned at design time, the de facto storage system for the Internet.

AWS at Airbnb

Accommodation booking site Airbnb has been on AWS since they launched. They now use a wide variety of services including S3, EC2, the Relational Database Service (RDS), Route 53, ElastiCache, Redshift, and DynamoDB. With 9 million customers, 1000 EC2 instances, 2 billion rows of data in RDS, and 50 terabytes of photos stored in S3, Airbnb is run by an operations team of just five people.

In the video below, Airbnb VP of Engineering Mike Curtis talks about the benefits that they have seen from using AWS:

Onward and Upward!

Eight years down the road from the launch of S3, I remain as excited as ever about the future of AWS and of cloud computing. I have written and published 1,645 posts since the launch of S3 (972,246 words, but who's counting?) and am doing my best to keep up with all of the cool stuff that our teams build. Stay tuned for the next eight years and you won't be disappointed!

The VM Import/Export feature gives you the power to import existing virtual machine images to Amazon EC2 instances and to export them back to your on-premises environment. You can move images to hasten and simplify your migration from on-premises to the AWS cloud or as part of a disaster recovery model.

AWS will provide the appropriate Microsoft Windows Server license key for the imported image. Your on-premises key will not be used in the cloud and you are free to use it for other Windows Server images that are still running in your on-premises environment.

The EC2 documentation contains complete information on the steps that you need to take to perform an import or export operation.

Windows Import Enhancements

In addition to adding support for Windows Server 2012, we have also made a few improvements to the import process for customers importing Windows Server 2003 and Windows Server 2008 images. Amazon EC2 instances created from Windows VMs will now benefit from having EC2Config installed by default and from having the latest-generation Citrix PV drivers.

Start Today

Support for Windows Server 2012 is available now and you can start using it today.

You can also import Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, Windows Server 2008 R2, Red Hat Enterprise Linux, CentOS, Ubuntu, and Debian images; see the VM Import Prerequisites and Before You Get Started section of the documentation for additional information.

The AWS GovCloud (US) Region is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements.

Red Hat Enterprise Linux was designed for secure, enterprise computing. The Security-Enhanced Linux (SELinux) capabilities found in RHEL have fostered adoption across many agencies of the United States government.

With a total of 15 Common Criteria certifications across four hardware platforms, RHEL is one of the industry's most certified operating systems. Today's launch of RHEL in AWS GovCloud (US) means that government users can now standardize on a single operating system for on-premises and cloud-based deployments.

Update: I have received several questions about the ITAR restriction shown in the image above. Here's some background information:

Red Hat Enterprise Linux customers on AWS GovCloud (US) receive full support from Amazon Web Services, which is backed by Red Hat's award-winning Global Support Services. Because Red Hat is not currently equipped to accept export controlled materials (ITAR), Amazon Web Services and Red Hat customers must verify that any data provided to Red Hat is compliant with any export controls that may apply. For more information, please see https://access.redhat.com/site/solutions/748633.