We launched Amazon EC2 with a single instance type (the venerable m1.small) in 2006. Over the years we have added many new instance types in order to allow our customers to run a very wide variety of applications and workloads.

The Second Generation Standard Instances
Today we are continuing that practice with the addition of a second generation to the Standard family of instances. These instances have the same CPU-to-memory ratio as the existing Standard instances. With up to 50% higher absolute CPU performance, these instances are optimized for applications such as media encoding, batch processing, caching, and web serving.

There are two second generation Standard instance types, both of which are 64-bit platforms:

The Extra Large Instance (m3.xlarge) has 15 GB of memory and 13 ECU spread across 4 virtual cores, with high I/O performance.

The Double Extra Large Instance (m3.2xlarge) has 30 GB of memory and 26 ECU spread across 8 virtual cores, with high I/O performance.
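If you want to try one of these new instance types, here's a minimal sketch using boto3, a Python SDK that post-dates this post; the AMI ID and key pair name are placeholders you would replace with your own:

    # Sketch: launch a Double Extra Large (m3.2xlarge) instance in
    # US East (Northern Virginia). The AMI ID and key name below are
    # placeholders, not real values.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-00000000",     # placeholder: any 64-bit Linux AMI
        InstanceType="m3.2xlarge",  # second generation Standard instance
        KeyName="my-key-pair",      # placeholder key pair name
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])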

The instances are now available in the US East (Northern Virginia) region; we plan to support them in the other regions in early 2013.

On Demand pricing in the region for an instance running Linux starts at $0.58 per hour (Extra Large) and $1.16 per hour (Double Extra Large). Reserved Instances are available, and the instances can also be found on the EC2 Spot Market.
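As a quick back-of-the-envelope sketch (not an official pricing tool), here's what those hourly rates add up to over a month of continuous use:

    # Monthly On-Demand cost at the US East Linux rates quoted above.
    # 730 is the average number of hours in a month.
    HOURS_PER_MONTH = 730

    for name, hourly in [("m3.xlarge", 0.58), ("m3.2xlarge", 1.16)]:
        print(f"{name}: ${hourly * HOURS_PER_MONTH:,.2f}/month")

    # m3.xlarge: $423.40/month
    # m3.2xlarge: $846.80/month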

Price Reductions
As part of this launch, we are reducing prices for the first generation Standard (m1) instances running Linux in the US East (Northern Virginia) and US West (Oregon) regions by over 18%, as follows:

Instance Type    New On Demand Price    Old On Demand Price
Small            $0.065/hour            $0.08/hour
Medium           $0.13/hour             $0.16/hour
Large            $0.26/hour             $0.32/hour
Extra Large      $0.52/hour             $0.64/hour

There are no changes to the Reserved Instance or Windows pricing.
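If you'd like to verify the "over 18%" figure, the reduction works out identically for every size in the table; here's a quick sketch:

    # Old and new US East Linux On-Demand prices from the table above.
    prices = {"Small": (0.08, 0.065), "Medium": (0.16, 0.13),
              "Large": (0.32, 0.26), "Extra Large": (0.64, 0.52)}

    for size, (old, new) in prices.items():
        print(f"{size}: {(old - new) / old * 100:.2f}% lower")  # 18.75% each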

Meet the Family
With the launch of the m3 Standard instances, you can now choose from seventeen instance types across seven families. Let's recap, just so that you are aware of all of your options (details here):

The first (m1) and second (m3) generation Standard (1.7 GB to 30 GB of memory) instances are well suited to most applications. The m3 instances are for applications that can benefit from higher CPU performance than the m1 instances offer.

The Micro instance (613 MB of memory) is great for lower throughput applications and web sites.

The High Memory instances (17.1 to 68.4 GB of memory) are designed for memory-bound applications, including databases and memory caches.

The High-CPU instances (1.7 to 7 GB of memory) are designed for scaled-out compute-intensive applications, with a higher ratio of CPU relative to memory.

The Cluster Compute instances (23 to 60.5 GB of memory) are designed for compute-intensive applications that require high-performance networking.

The Cluster GPU instances (22 GB of memory) are designed for compute and network-intensive workloads that can also make use of a GPGPU (general purpose graphics processing unit) for highly parallelized processing.

With this wide variety of instance types at your fingertips, you might want to think about benchmarking each component of your application on every applicable instance type in order to find the one that gives you the best performance and the best value.
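If you do run such a benchmark, the final comparison is simple arithmetic; here's an illustrative sketch (the benchmark scores are placeholders, the prices are the US East Linux On-Demand rates from this post):

    # Pick the best price-performance instance from benchmark results.
    # Scores are illustrative placeholders; prices are $/hour.
    results = {
        "m1.xlarge": {"score": 100, "price": 0.52},
        "m3.xlarge": {"score": 150, "price": 0.58},  # up to 50% faster CPU
    }

    for itype, r in results.items():
        print(f"{itype}: {r['score'] / r['price']:.1f} score per $/hour")

    best = max(results, key=lambda t: results[t]["score"] / results[t]["price"])
    print("Best value:", best)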

New Contest Close Date – December 5, 2012
We are extending the close date of the contest from Friday, November 9, 2012 to Wednesday, December 5, 2012. We hope this will give you more time to complete your application and submit it before the deadline.

If you have already submitted your application, you can go back and modify your entry through YouNoodle. To do this, you’ll need to unclick the ‘submit’ button, make your changes, and then submit the entry form again. Please email awsstartups@amazon.com if you experience any issues while trying to update your entry.

New Judging Timeline
As we have changed the contest close date to December 5, our internal judging timeline has also changed:

First Round Judging: We will review applications for the Challenge in mid-December, and announce the Semi-Finalists in late December.

Second Round Judging: We will review the Semi-Finalist applications in late December, and announce the Finalists in early January.

Detailed information about the new judging timeline can be found on the Official Rules page on the AWS Website.

Final Judging Event – San Francisco, CA
The final judging round will take place on January 23-24, 2013 at the W Hotel in San Francisco, California. 12 finalists will be flown from their place of residence to participate in the final judging round, where they will present to the Amazon Web Services Executive Team, as well as representatives from VC firms such as Sequoia, First Round Capital, and Madrona.

After the final judging round, we will be hosting a finale event at DogPatch Wine Works in San Francisco, CA, where we will announce the 4 Grand Prize Winners. Here, the 12 finalists will get to present to a large audience of startups, entrepreneurs, and like-minded business professionals. This event is open to everyone; registration details will be announced shortly. I look forward to meeting those of you who are able to join us.

Next Steps
Don't miss your chance to enter this year. Learn more about this year's Challenge by visiting http://aws.amazon.com/startupchallenge. The deadline to enter is 11:59:59 P.M. (PT) on Wednesday, December 5, 2012.

If you have more questions about the contest or your entry application, please refer to the Official Rules, the Frequently Asked Questions page, or email awsstartups@amazon.com.

I have been having discussions related to the Total Cost of Ownership (TCO) of running workloads in the cloud as compared to running them on-premises. One consistent conclusion is that weighing the financial considerations of owning and operating a data center or co-located facility versus employing a cloud infrastructure or a cloud service requires detailed and careful analysis. TCO is the financial metric often used to estimate and compare the direct and indirect costs of a product or a service. It typically includes the actual costs of procurement, management, maintenance, and decommissioning of hardware resources over their useful life (typically a 3- or 5-year period). Given the plethora of hardware configurations available today, it can be difficult to know the actual costs and come up with an accurate TCO model that represents the true cost of running your application.
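To make that concrete, here is a deliberately simplified sketch of the amortization at the heart of any TCO model; all of the dollar figures are illustrative placeholders:

    # Simplified TCO sketch: spread a server's lifetime costs over its
    # useful life to get an effective hourly rate. Figures are illustrative.
    def effective_hourly_cost(procurement, annual_ops, decommission,
                              useful_life_years=3):
        total = procurement + annual_ops * useful_life_years + decommission
        return total / (useful_life_years * 8760)  # 8760 hours per year

    # e.g. a $6,000 server, $2,400/year to power and manage, $500 disposal:
    print(f"${effective_hourly_cost(6000, 2400, 500):.3f}/hour")  # ~$0.521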

I’m also hearing from customers that it can be challenging for them to make true apples-to-apples comparisons between on-premises infrastructure and infrastructure that is offered as a service. In practice, it is not as simple as just measuring potential hardware expense alongside utility pricing for compute and storage resources (read our TCO Whitepaper). We have noticed that customers struggle to compare the two models, especially when they are trying to compare the TCO of a web application scenario that includes compute, storage, network access, load balancing, and all the other components of the architecture.

I am very happy to announce the new TCO Calculator for Web Applications. This tool should help anyone with even a basic level of familiarity with infrastructure generate a fact-based, apples-to-apples TCO comparison of on-premises and AWS infrastructure. The tool is simple to use and allows you to adjust the assumptions to match your current data center setup. In three easy steps, it provides both a summary and a comprehensive detailed report, along with an FAQ that explains the assumptions and methods used.

To provide an objective analysis, the tool uses a combination of data points from analyst research and analysis of data from hundreds of customers, performed by both AWS and AWS partners. The calculator also exposes user-adjustable variables to further ensure objectivity. We commissioned 2nd Watch, an AWS Advanced Partner, to build the calculator; they leveraged their own wealth of customer data points and their experience operating both data center and AWS infrastructure.

Real Example
Let's take a look at an example to help us understand how this works. Let's assume that you wanted to refresh hardware, had to renew a contract with your existing co-lo facility, and were evaluating whether the AWS cloud would be more cost-effective in the long run.

Let’s say that the following is your existing or planned on-premises infrastructure specification:

The beauty of this tool is that you will be able to input exactly this configuration and adjust your assumptions. Whether you use it by just answering seven simple questions, or you spend more time with it to model more detailed scenarios, it can save you lots of effort while providing comprehensive comparisons.

This web-based calculator is currently in beta and is optimized for the Web Application use case, although it can be used to model other infrastructure setups. As always, we are looking for feedback, suggestions, and comments, and we hope to roll out more use cases and improve it over time.

Driving costs down for our customers is part of the DNA of Amazon, and therefore also part of the DNA of AWS. We’re seeing how customers, from startups with hyper growth to large organizations with stable workloads, are able to leverage our low prices, usage-based billing, tiered pricing, and variety of purchasing options to continuously lower their cost of acquiring and operating applications. You can find more information, whitepapers, tools, and specific case studies on our Economics Center at http://aws.amazon.com/economics.

In this episode of The AWS Report, I spoke with Matt Lull, Managing Director, Global Strategic Alliances, for Citrix to learn more about their cloud strategy. We talked about their line of virtualization products including Xen, XenServer, CloudBridge, and the Citrix NetScaler.

After that we talked about the concept of desktop virtualization, and Matt told me "Work isn't a place you go anymore, it is a thing that you do." From there we wrapped up with a discussion about AWS re:Invent.

The AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage.

We launched the Storage Gateway earlier this year. The initial release supported on-premises iSCSI volume storage (what we call Gateway-Stored volumes), with snapshot backups to the cloud. Volume data stored locally is pushed to Amazon S3, where it is stored in redundant, encrypted form and made available in the form of Elastic Block Storage (EBS) snapshots. When you use this model, the on-premises storage is primary, delivering low-latency access to your entire dataset, and the cloud storage is the backup. We’ve seen great pickup of the gateway during the beta, with many customers using the service for cost-effective, durable off-site backup.

We are now adding support for Gateway-Cached volumes. With Gateway-Cached volumes, your storage volume data is stored encrypted in Amazon S3, visible within your enterprise's network via an iSCSI interface. Recently accessed data is cached on-premises for low-latency local access. You get low-latency access to your active working set, and seamless access to your entire data set stored in Amazon S3.

Each Gateway-Cached volume can store up to 32 TB of data and you can create multiple volumes on each gateway. Cloud storage is consumed only as data is actually written to the volume, and you pay only for what you use. This means that you can use the Gateway-Cached volumes to economically store data sets that grow in size over time, without having to scale your on-premises storage infrastructure. Corporate directory trees, home directories, backup application data, and email archives are often well-suited to this model. Gateway-Cached volumes also provide the ability to take point-in-time snapshots of your volumes in Amazon S3, which you can use to store prior versions of your data. These snapshots are stored as Amazon EBS snapshots.
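If you script your infrastructure, here's a hedged sketch of creating a Gateway-Cached volume with boto3, a Python SDK that post-dates this launch; the gateway ARN, IP address, and target name are all placeholders:

    # Sketch: create a 2 TB Gateway-Cached volume. All identifiers below
    # are placeholders, not real values.
    import uuid
    import boto3

    sgw = boto3.client("storagegateway", region_name="us-east-1")

    resp = sgw.create_cached_iscsi_volume(
        GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
        VolumeSizeInBytes=2 * 1024**4,   # 2 TB; volumes can go up to 32 TB
        TargetName="my-cached-volume",   # becomes part of the iSCSI target name
        NetworkInterfaceId="10.0.0.10",  # the gateway's local IP address
        ClientToken=uuid.uuid4().hex,    # idempotency token
    )
    print(resp["VolumeARN"])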

Here's a diagram to put all of the pieces together:

You can create and configure new volumes through the AWS Management Console. In addition to specifying the size of each new volume, you also have control over two types of on-premises storage: the upload buffer and the cache storage. Upload buffer is used to buffer your writes to Amazon S3. Cache storage holds your volumes’ recently accessed data. While the optimal size for each will vary based on your data access pattern, we generally recommend that you have an upload buffer that's large enough to hold one day's worth of changed data. If you create an 8 TB volume and change about 5% of it each day, a 400 GB upload buffer should do the trick. The cache storage should be large enough to store your active working set of data and at least as big as the upload buffer.
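That sizing guidance is easy to express as a small sketch; the working set size below is a placeholder:

    # Rule of thumb from above: upload buffer >= one day of changed data;
    # cache >= your working set, and at least as big as the upload buffer.
    def recommended_sizes_gb(volume_gb, daily_change_rate, working_set_gb):
        upload_buffer = volume_gb * daily_change_rate
        cache = max(working_set_gb, upload_buffer)
        return upload_buffer, cache

    # The example from the text: an 8 TB volume with ~5% daily change.
    buf, cache = recommended_sizes_gb(8000, 0.05, working_set_gb=1200)
    print(f"upload buffer: {buf:.0f} GB, cache: {cache:.0f} GB")  # 400 GB buffer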

We are also taking this opportunity to promote the AWS Storage Gateway to General Availability. You can use it to support a number of important data storage scenarios, such as corporate file sharing, backup, and disaster recovery (DR), in a manner that seamlessly integrates local and cloud storage.

We'll be running a free Storage Gateway webinar on December 5th, 2012. You'll learn how to use the AWS Storage Gateway to back up your data to Amazon S3. You'll also learn how you can seamlessly store your corporate file shares on Amazon S3, while keeping copies of frequently accessed files on-premises.

You can get started with the AWS Storage Gateway by taking advantage of our free 60-day trial. After that, there is a charge of $125/month for each activated gateway. Pricing for Gateway-Cached storage starts at $0.125 per gigabyte per month. Register for your free trial and get started today!
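As a rough sketch of the post-trial bill for a single gateway (ignoring volume tiers, snapshot storage, and data transfer):

    # Monthly cost estimate at the prices above: $125 gateway fee plus
    # Gateway-Cached storage starting at $0.125 per GB-month.
    def gateway_monthly_cost(cached_gb):
        return 125.0 + 0.125 * cached_gb

    print(f"${gateway_monthly_cost(2048):,.2f}/month for 2 TB cached")  # $381.00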

-- Jeff;

PS - The Storage Gateway team is looking for smart software development engineers at all levels of experience. You will get the opportunity to develop a cloud-based storage service that is changing storage for enterprise customers. Join a team that is smart, driven to serve customers, loves to tackle hard problems, and is fun to work with, in a start-up-like environment. Links to some of our positions are below:

For today's episode of the AWS Report, I spoke to Michael Crandell, co-founder and CEO of RightScale. RightScale was founded shortly after Amazon S3 and Amazon EC2 were introduced, with the goal of taking advantage of pay-as-you-go infrastructure as a service.

In this four-minute video, Michael describes RightScale's role as a Cloud Management Platform (CMP). He also talks about how they have expanded their focus from helping companies deal with flash crowds to a broader practice serving enterprise customers.

We chatted about their acquisition of PlanForCloud, and wrapped things up in front of the Space Needle, almost fully obscured by fog.

RightScale is a Platinum sponsor of AWS re:Invent. They will be demonstrating their products in the exhibition hall. RightScale co-founder Thorsten von Eicken will speak on the topic of "The Convergence of IaaS and PaaS."

In this episode of The AWS Report, I spoke with AWS Evangelist Matt Wood to learn about his track on Big Data and Analytics at AWS re:Invent. The track sounds awesome and I hope to be able to attend some of the sessions:

The micro instances provide a small amount of consistent CPU power, along with the ability to increase it in short bursts when additional cycles are available. They are a good match for lower throughput applications and web sites that require additional compute cycles from time to time.

With this release, you now have everything that you need to create and experiment with your very own Virtual Private Cloud at no cost. This is pretty cool and I'm sure you'll make good use of it.

Today, SAP announced HANA One, a deployment option for HANA that is certified for production use on AWS, available now in the AWS Marketplace. You can run this powerful, in-memory database on EC2 for just $0.99 per hour.

Because you can now launch HANA in the cloud, you don't need to spend time negotiating an enterprise agreement, and you don't have to buy a big server. If you are running your startup from a cafe or commanding your enterprise from a glass tower, you get the same deal. No long-term commitment and easy access to HANA, on an hourly, pay-as-you-go basis, charged through your AWS account.

What's HANA?
SAP HANA is an in-memory data platform well suited for performing real-time analytics, and for developing and deploying real-time applications.

I spent some time watching the videos on the Experience HANA site as I was getting ready to write this post. SAP founder Hasso Plattner described the process that led to the creation of HANA, starting with a decision to build a new enterprise database in December of 2006. He explained that he wanted to capitalize on two industry trends -- the availability of multi-core CPUs and the growth in the amount of RAM per system. Along with this, he wanted to exploit parallelism within the confines of a single application. Here's what they came up with:

Putting it all together, SAP HANA runs entirely in memory, eschewing spinning disk except for backup. Traditional disk-based data management solutions are optimized for transactional or analytic processing, but not both. Transactional processing is oriented around and optimized for row-based operations: inserts, updates, and deletes. In contrast, analytic processing is tuned for complex queries, often involving subsets of the columns in a particular table (hence the rise of column-oriented databases). All of this specialization and optimization is needed because accessing data stored on disk is 10,000 to 1,000,000 times slower than accessing data stored in memory. In addition to this bottleneck, disk-based systems are unable to take full advantage of multi-core CPUs.
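A toy sketch of the row/column distinction (purely illustrative; this is not how HANA is implemented):

    # Row store: each record kept together; efficient inserts and updates.
    rows = [
        {"id": 1, "region": "US", "revenue": 100},
        {"id": 2, "region": "EU", "revenue": 250},
    ]

    # Column store: each column kept together; efficient analytic scans.
    columns = {
        "id": [1, 2],
        "region": ["US", "EU"],
        "revenue": [100, 250],
    }

    # An aggregate over one column touches far less data in columnar form:
    print(sum(columns["revenue"]))  # 350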

At the base, SAP HANA is a complete, ACID-compliant relational database with support for most of SQL-92. At the top, you'll find an analytical interface using Multi-Dimensional Expressions (MDX) and support for SAP BusinessObjects. Between the two is a parallel data flow computing engine designed to scale across cores. HANA also includes a Business Function Library, a Predictive Analysis Library, and the "L" imperative language.

So, what is HANA good for? Great question! Here are some applications:

The folks at Taulia are building a dynamic discounting platform around HANA One. They're already using AWS to streamline their deployment and operations; HANA One will allow them to make their platform even more responsive.

This is an enterprise-class product (but one that's accessible to everyone) and I've barely scratched the surface. You can read this white paper to learn more (you may have to give the downloaded file a ".pdf" extension in order to open it).

Deploy HANA Now
As I mentioned earlier, SAP has certified HANA for production use on AWS. You can launch it today and get started now.

You don't have to spend a lot of money. You don't need to buy and install high-end hardware in your data center, and you don't need to license HANA. Instead, you can launch HANA from the AWS Marketplace and pay for the hardware and the software on an hourly, pay-as-you-go basis.

You'll pay $0.99 per hour to run HANA One on AWS, plus another $2.50 per hour for an EC2 Cluster Compute Eight Extra Large instance with 60.5 GB of RAM and dual Intel Xeon E5 processors, bringing the total software and hardware cost to just $3.49 per hour, plus standard AWS fees for EBS and data transfer.
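Checking that math as a final sketch (730 is the average number of hours in a month):

    # HANA One cost at the rates above, before EBS and data transfer fees.
    software, hardware = 0.99, 2.50        # $/hour
    hourly = software + hardware           # $3.49/hour
    print(f"${hourly:.2f}/hour, about ${hourly * 730:,.2f}/month")  # ~$2,547.70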