The team behind Amazon SimpleDB is in the process of designing and implementing a new API call. The new call will return information about a particular SimpleDB domain. They have posted the design spec to the SimpleDB forum and would be very interested in receiving some more feedback.

I've said it before and I'll say it again -- much of what we do at Amazon is driven by real feedback from real customers. If you are using or are thinking about using SimpleDB, please take a look at the spec and let us know (via a post to the forum) what you think.

I get to meet with lots of developers and system architects as part of my job. Talking to them about cloud computing and about Amazon Web Services is both challenging and rewarding. Cloud computing as a concept is still relatively new. When I explain what it is and what it enables, I can almost literally see the light bulbs lighting up in people's heads as they understand cloud computing and our services, and what they can do with them.

A typical audience contains a nice mix of wild-eyed enthusiasts and more conservative skeptics. The enthusiasts are ready to jump in to cloud computing with both feet, and start to make plans to move corporate assets and processes to the cloud as soon as possible. The conservative folks can appreciate the benefits of cloud computing, but would prefer to take a more careful and measured approach. When the enthusiasts and the skeptics are part of the same organization, they argue back and forth and often come up with an interesting hybrid approach.

The details vary, but a pattern is starting to emerge. The conservative side advocates keeping core business processes inside of the firewall. The enthusiasts want to run on the cloud. They argue back and forth for a while, and eventually settle on a really nice hybrid solution. In a nutshell, they plan to run the steady state business processing on existing systems, and then use the cloud for periodic or overflow processing.

After watching (sometimes in real time in the course of a meeting) this negotiation and ultimate compromise take place time and time again in the last few months, I decided to invent a new word to describe what they are doing. I could have come up with some kind of lifeless and forgettable acronym, but that's not my style. I proposed cloudbursting in a meeting a month or two ago and everyone seemed to like it.

Earlier this week my colleague Deepak Singh pointed me to a blog post written by Thomas Brox Røst. In the post, Thomas talks about how he combined traditional hosting with an EC2-powered, batch mode page regeneration system. His site (Eventseer) contains over 600,000 highly interconnected pages. As traffic and content grew, serving up the pages dynamically became prohibitively expensive. Regenerating all of the pages on a single server would have taken an unacceptably long 7 days, and even longer as the site became more complex. Instead, Thomas used a cloudbursting model, regenerating the pages on an array of 25 Amazon EC2 instances in just 5 hours (or, as he notes, "roughly the cost of a pint of beer in Norway."). There's some more information about his approach on the High Scalability blog. Thomas has also written about running Django on EC2 using EBS.

I'd be interested in hearing about more approaches to building applications that cloudburst.

Amazon SimpleDB just released an update that includes a new feature called QueryWithAttributes. With this update, developers can now retrieve all of the information associated with the items returned in response to a particular query. The feature also provides additional flexibility because it enables you to retrieve anywhere between one and all of the attributes for each item. This highly requested feature simplifies the application development process for all clients of Amazon SimpleDB. Instead of issuing a Query request followed by a series of GetAttributes requests, application developers can now use a single API call to retrieve all of the information about items stored in Amazon SimpleDB.

I am very excited about this new feature because it simplifies my application code. It is especially useful for developers who are not accustomed to parallel programming, or who use programming languages that do not support it well.

The updated API documentation is here. I also highly recommend reading the Query 101, Query 102, and best practices articles from our resource center. Amazon SimpleDB is still in limited beta; however, if you sign up for the service at aws.amazon.com/SimpleDB you'll be the first to know when additional applications are accepted.

If you don’t list specific attributes in your query, then all attributes are returned—which is the default behavior of this new API method.

There’s a FAQ below; however I believe that examples always help developers understand what the changes mean in terms of writing code:
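To make the difference concrete, here's a minimal in-memory sketch that contrasts the old two-step pattern (a Query followed by one GetAttributes per item) with the new single-call pattern. The FakeDomain class and its method names are hypothetical stand-ins for illustration, not the actual SimpleDB client API.

```python
# Illustrative only: an in-memory stand-in for a SimpleDB domain.
class FakeDomain:
    def __init__(self, items):
        self.items = items          # {item_name: {attribute: value}}
        self.call_count = 0         # count API round trips

    def query(self, predicate):
        """Old-style Query: returns matching item names only."""
        self.call_count += 1
        return [name for name, attrs in self.items.items() if predicate(attrs)]

    def get_attributes(self, item_name):
        """Old-style GetAttributes: one call per item."""
        self.call_count += 1
        return dict(self.items[item_name])

    def query_with_attributes(self, predicate, attribute_names=None):
        """New-style QueryWithAttributes: names plus attributes in one call."""
        self.call_count += 1
        result = {}
        for name, attrs in self.items.items():
            if predicate(attrs):
                if attribute_names is None:      # default: return all attributes
                    result[name] = dict(attrs)
                else:                            # or only the ones requested
                    result[name] = {k: v for k, v in attrs.items()
                                    if k in attribute_names}
        return result

domain = FakeDomain({
    "song1": {"artist": "Rush", "year": "1981"},
    "song2": {"artist": "Rush", "year": "1984"},
    "song3": {"artist": "Yes",  "year": "1972"},
})
rush = lambda attrs: attrs["artist"] == "Rush"

# Old pattern: 1 Query + 1 GetAttributes per matching item = 3 calls here.
names = domain.query(rush)
old_result = {n: domain.get_attributes(n) for n in names}
calls_old = domain.call_count

# New pattern: one QueryWithAttributes call returns the same information.
domain.call_count = 0
new_result = domain.query_with_attributes(rush, attribute_names=["year"])
print(calls_old, domain.call_count)   # 3 1
```

The point of the sketch is the call count: the round trips grow with the number of matching items in the old pattern, but stay constant at one per page in the new one.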

FAQ

Q: Can I use the same query language?
A: Yes, the query language is exactly the same as for the regular Query API call.

Q: Will I get back the same set of items?
A: Yes, the overall set of items that will match a given query expression is exactly the same as that for the regular Query API call.

Q: How many attributes can I retrieve for each item?
A: You can retrieve anywhere between one and all attributes for each item. The default behavior is to return all attributes, but you can specify a list of specific attributes to return.

Q: Will my result set be paginated?
A: Yes, Amazon SimpleDB paginates the result set if it exceeds the specified maximum number of items or a total response size of 1 MB.

Q: How many items can I retrieve in one page of results?
A: You can specify a maximum number of items to return per page, anywhere from 1 to 250 (the default is 100). Amazon SimpleDB will return as many items as possible per page without exceeding the maximum response size (1 MB) or the maximum number of items specified.

Q: What happens to page size if my attributes are very large?
A: Your page size will likely be smaller than the maximum number of items specified, since the overall size of the response object will approach the 1 MB limit.

Q: Will I ever get one item split across multiple pages of results?
A: No, an item will never be split across multiple pages of results. All specified attributes for a given item will be returned within the same page of results.

Q: Does the query timeout apply to my queries?
A: Yes, the same query timeout applies to long-running queries.

Q: How much does each call cost?
A: The cost of each call is proportional to the amount of system resources that it consumes. You can monitor the cost through the BoxUsage parameter, which is returned with every response.

Q: Is the Amazon SimpleDB beta open to all comers now?
A: Amazon SimpleDB is still in limited beta. However, if you sign up for the service at http://aws.amazon.com/SimpleDB you'll be the first to know when additional applications are accepted.
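The pagination described above follows the usual token-driven loop: keep issuing the call, passing back the returned NextToken, until no token comes back. Here's a sketch of that loop; fake_query is a hypothetical stand-in for the real QueryWithAttributes call, which would also end a page early if the 1 MB response-size limit were reached.

```python
# Twelve dummy item names, standing in for a SimpleDB domain's contents.
ITEMS = [f"item{i:03d}" for i in range(12)]

def fake_query(max_items=100, next_token=0):
    """Return (page_of_items, next_token); next_token is None on the last page.
    A stand-in for QueryWithAttributes, ignoring the 1 MB size limit."""
    max_items = min(max_items, 250)               # service cap: 250 items/page
    page = ITEMS[next_token:next_token + max_items]
    end = next_token + len(page)
    return page, (end if end < len(ITEMS) else None)

# The client-side pagination loop: follow NextToken until it is exhausted.
all_items, token = [], 0
while token is not None:
    page, token = fake_query(max_items=5, next_token=token)
    all_items.extend(page)

print(len(all_items))   # 12, collected in pages of at most 5 items
```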

A few months ago I talked about our plans to offer a persistent storage feature for Amazon EC2. At that time I indicated that the service was in a limited alpha release with a small number of customers. Since then the alpha testers have been putting the service to good use and have provided us with a lot of very helpful feedback.

EBS gives you persistent, high-performance, high-availability block-level storage which you can attach to a running instance of EC2. You can format it and mount it as a file system, or you can access the raw storage directly. You can, of course, host a database on an EBS volume. In fact, Eric Hammond has already written an article, Running MySQL on Amazon EC2 with Elastic Block Store.

EBS volumes can range in size from 1 GB to 1 TB. You can mount many of them on the same instance, and even stripe (aka RAID 0) your data across them to increase performance.
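Striping is easy to picture with a toy model: fixed-size chunks are dealt round-robin across N volumes, so large sequential reads and writes can hit all volumes in parallel. This is a conceptual sketch of what RAID 0 does, not how you'd configure it (in practice you'd use a tool like mdadm); the chunk size and volume count are arbitrary illustrative values.

```python
CHUNK = 4          # bytes per stripe unit (tiny, for illustration)
N_VOLUMES = 3      # number of EBS volumes in the stripe set

def stripe(data):
    """Deal fixed-size chunks of data round-robin across the volumes."""
    volumes = [bytearray() for _ in range(N_VOLUMES)]
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    for i, chunk in enumerate(chunks):
        volumes[i % N_VOLUMES].extend(chunk)   # chunk i -> volume i mod N
    return volumes

def unstripe(volumes, total_len):
    """Reassemble the original byte stream from the striped volumes."""
    out = bytearray()
    offsets = [0] * N_VOLUMES
    i = 0
    while len(out) < total_len:
        v = i % N_VOLUMES
        out.extend(volumes[v][offsets[v]:offsets[v] + CHUNK])
        offsets[v] += CHUNK
        i += 1
    return bytes(out)

data = b"The quick brown fox jumps over the lazy dog"
vols = stripe(data)
assert unstripe(vols, len(data)) == data   # round-trips losslessly
```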

The volumes can be attached to any single instance within a single EC2 availability zone. They are also automatically replicated within the zone.

During the beta you can create up to 20 EBS volumes consuming a maximum of 20 TB of space. You can make a request for additional volumes here.

You can snapshot a volume to Amazon S3 with ease, and then, if needed, create new volumes (of the same or a different size) using the snapshot as a base. Of course, if you create a new volume with a size that doesn't match the size of the volume from which you took the snapshot, you will have to resize the new file system. When you create a new volume based on an S3 snapshot, the data is loaded lazily; there's no need to wait for the entire snapshot to load before using the volume.

EBS usage is charged based on storage and on I/O requests. Storage costs $0.10 per GB per month and I/O requests cost $0.10 per million. Snapshot storage is charged at Amazon S3 rates. The AWS Simple Monthly Calculator has been updated to reflect the new features so that you can estimate your costs with ease:
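Here's a back-of-the-envelope estimate using the rates quoted above ($0.10 per GB-month of storage and $0.10 per million I/O requests); snapshot storage, billed at S3 rates, is left out of this sketch.

```python
STORAGE_RATE = 0.10      # dollars per GB-month of provisioned storage
IO_RATE = 0.10           # dollars per million I/O requests

def ebs_monthly_cost(gb, io_requests):
    """Estimated monthly EBS cost, excluding snapshot (S3) storage."""
    return gb * STORAGE_RATE + (io_requests / 1_000_000) * IO_RATE

# A 100 GB volume handling 50 million I/O requests in a month:
# $10 for storage + $5 for I/O.
print(f"${ebs_monthly_cost(100, 50_000_000):.2f}")   # $15.00
```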

All of the EBS functionality can be accessed through the EC2 APIs, through the EC2 Command Line tools, through ElasticFox, and via a number of third-party tools and libraries.

The popular ElasticFox extension for Firefox has been updated with full support for EBS. You can see all of your volumes and your snapshots on a new tab:

You can create volumes and attach them to running instances using simple dialog boxes:

You can create a snapshot with a single click:

And then create a new volume from the snapshot just as easily:

Third party tool and library support is already starting to appear. In fact, I've created a separate post, Amazon EBS - Tool and Library Support, which I will be updating a couple of times in the next day or so as announcements are made.

Also, Amazon CTO Werner Vogels has written a really good post which includes some great insights into the architectural and philosophical considerations behind our line of storage services.

This is a companion post to my earlier post -- Amazon EBS (Elastic Block Store) - Bring Us Your Data. In the other post you can read about the features of EBS. This post goes into more detail on the tool and library support that has been built by our community of third-party developers.

Earlier this year I talked about the unique and powerful AWS-powered solutions offered by Vertica and Sonian.

Tomorrow (August 21st), I will be taking part in a unique, three-party webinar. In the webinar you'll get to hear from me, from Vertica Field Engineering Director Omer Trajman, and from Sonian CTO Greg Arnette. The webinar will start at 8 AM PST.

In the webinar you will learn how cloud computing is changing the economics of data warehousing and large-scale analytic database applications. You'll hear how Sonian has built and launched a cloud-based digital content archiving system on top of Amazon EC2 and the Vertica Analytical Database for the Cloud.

The webinar is free but you do need to register ahead of time. Hope to see you there.

We’re really excited to announce our AWS Start-Up Tour again in 2008, and this year we’re adding cities to include more hotbeds of innovation. The event is focused on the interests and needs of the startup community, so if you are an entrepreneur or startup leader this is an opportunity to hear about Amazon Web Services—and hear about the real-world experiences of others who already innovate on the AWS platform.

Last year’s tour featured a number of startups, such as AideRSS, Geezeo, Renkoo, SmugMug, Slideshare, Animoto, and Ooyala—just to mention a few. You can see their presentations here. Note that “here” is Slideshare’s site, which goes to show that startups not only innovate on AWS; they also deliver compelling utility to others. These companies are the centerpiece of each event—as you can see in their presentations, each company has a unique and creative idea. And every one of them taught me something about implementing Amazon Web Services in the real world.

One of the major value propositions of Amazon Web Services is the utility pricing plan: you only pay for what you use, and the cost is very low. Sometimes it feels like I am just saying that, not because there is any doubt that it’s true, but because it’s difficult to produce metrics to back up the assertion that low-cost utility pricing is truly a game changer.

Then it hit me… Looking at the list of Start-Up Project presentations on Slideshare’s site, I realized that not a single one of these companies is “off the air”; that is, they all are still in business. In the Startup world that is nothing short of amazing—especially in this economy. (Some of the decks on Slideshare's site are not from last year’s startup events; however even those other companies appear to be alive and well.)
Amazon can’t take all the credit for this track record; however it does seem to be a solid data point that validates the value proposition.

Amazon DevPay now has a new and very powerful feature: tiered pricing for all usage-based components of a product's price.

Using this new feature, you have more flexibility when you create the pricing plan for your product. Specifically, you can now create multiple levels, or tiers. You can create any number of tiers within your pricing plan. Pricing for each tier is based on the usage incurred by each of your customers.

Let's take a look at some of the models that you can create:

First, you can create a free usage level to make it easy for customers to give your product a try. You would set the sign-up fee and the monthly fee to zero, and then create a set of tiers. If you have a storage-based product, you could allow customers to use Amazon S3 to store up to 2 GB / month for free, with a charge of $1 for each GB / month beyond that. As the business owner you would be responsible for the entire cost of S3, so you'd want to make sure that you are providing sufficient value to ensure that your users grow from the free tier to the paid tier.

Second, you can create a model which is similar to the typical cell phone pricing plan. In this case you would charge a sign-up fee and a monthly fee, and would then include a certain amount of free usage as part of the first tier, with additional (and more costly) tiers after that. Again, with a storage-based product, you could charge $5 to get started, $5 per month for usage, and then allow up to 10 GB / month at no additional charge.
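Both models reduce to the same mechanic: charge each slice of usage at the rate of the tier it falls in. Here's a sketch of that calculation; the tier boundaries and rates are the hypothetical figures from the examples above (I've assumed $1 / GB beyond the 10 GB allowance in the second model), not anything DevPay prescribes.

```python
def tiered_charge(usage, tiers):
    """Charge usage against successive tiers.
    tiers: list of (tier_ceiling, rate_per_unit), ceilings increasing,
    with float('inf') as the last ceiling."""
    total, prev_ceiling = 0.0, 0.0
    for ceiling, rate in tiers:
        in_tier = max(0.0, min(usage, ceiling) - prev_ceiling)
        total += in_tier * rate
        prev_ceiling = ceiling
    return total

# Model 1: no sign-up or monthly fee; first 2 GB free, then $1 / GB.
free_tier = [(2, 0.00), (float("inf"), 1.00)]
print(tiered_charge(5, free_tier))     # 3.0 -> 2 GB free + 3 GB at $1

# Model 2: $5 sign-up, $5 / month, first 10 GB included; $1 / GB after
# that is an assumed rate for illustration.
phone_style = [(10, 0.00), (float("inf"), 1.00)]
monthly_bill = 5.00 + tiered_charge(25, phone_style)
print(monthly_bill)                    # 20.0 for a month with 25 GB used
```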

It is important to note that Amazon DevPay handles all of the nitty-gritty details associated with creating, changing, and billing your customers. You don't have to deal with partial month subscriptions, boundary conditions, or the complexities involved in changes to the prices for each tier or even to the number of tiers. You can read all about this in the documentation.