Amazon's new S3 service is available on a pay-as-you-go basis. This is commonly called utility computing; it is directly analogous to the way you pay for the water, electricity, and natural gas that you use in your home.

With this post I would like to make clear just how easy it is to use this type of web service. I'm writing this because some of the developers I talk to seem to think that it must be harder than it actually is, and I want to correct that notion as soon as possible.

To use services like S3 you don't need to call us, you don't need to set up a meeting, you don't need to enter into any negotiating sessions with us, you don't need to start writing custom contracts, and you don't need to send us any money up-front. We designed this service and all of the infrastructure around it to be self-serve, straightforward, and trouble-free.

Let's look at registration, service signup, reporting, and billing in turn.

On the registration side, you start by creating a free Amazon Web Services Developer Account, which you can do here. As part of this process you will be asked to read and signify your acceptance of the license agreement. When you read the agreement, you should also take care to read any sections that are peculiar to one particular service. After you have signed up, click on the "Your Web Services Account" button to access some important information:

Click "View Access Key Identifiers" to see your Access Key ID. You must pass this parameter to us as part of any request to any of our services. You will also see your Secret Access Key on the same page. You will never be asked to divulge this key, and you should never pass it to us as part of any request. Instead, you will use this key as part of a signing process; this process authenticates your requests so that we know that you (and not some other developer) are in fact making them.
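To make the signing process a little more concrete, here's a minimal sketch in Python (not one of our official samples, and simplified: it ignores extra headers and query parameters; see the S3 documentation for the full canonicalization rules). The request details are joined into a string, signed with HMAC-SHA1 using your Secret Access Key, and base64-encoded:

```python
import base64
import hashlib
import hmac

def sign_s3_request(secret_key: str, method: str, date: str,
                    bucket: str, key: str) -> str:
    """Compute a simplified S3 REST signature (no extra headers)."""
    # The string to sign covers the HTTP verb, empty Content-MD5 and
    # Content-Type lines, the request date, and the resource path.
    string_to_sign = f"{method}\n\n\n{date}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

signature = sign_s3_request("my-secret-key", "GET",
                            "Tue, 27 Mar 2007 19:36:42 GMT",
                            "my-bucket", "my-object")
# The signature (never the secret key itself) travels with the request:
#   Authorization: AWS <AccessKeyID>:<signature>
```

Note that only the derived signature is transmitted; the Secret Access Key never leaves your machine, which is exactly why you should never send it to us directly.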

At this point you are a registered AWS developer, and you have agreed to the license, but you are under no financial obligation to us.

In order to sign up for S3, you simply visit the S3 page, and then click "Sign up for Web Service":

From here you confirm your acceptance of the pricing for the service, confirm your credit card and billing address (or enter a new one), and then sign up (this picture shows me signing up for a different service; I had to do that in order to take the screen shot because I had signed up for S3 already):

Now comes the fun part, building your application using our web services APIs. As soon as you start to make calls to the services you've signed up for, we'll be tracking your usage.

Ok, so what about reporting, you ask? You can view your usage at any time and on a per-service basis using the "Usage Report" option:

After you select this option you will be prompted for some additional information, including the date range and the report format: you can download the report as XML or as CSV (comma-separated values).

I chose to download the data in CSV form, and then stuffed it into an Excel worksheet. Here's what it looks like:
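Of course, Excel isn't the only option; a few lines of Python can slice the CSV report just as easily. Here's a sketch that totals one column — the column names in this excerpt are hypothetical, so check the header row of your actual report:

```python
import csv
import io

# Hypothetical report excerpt; your real report's columns may differ.
sample_report = """Service,Operation,UsageValue
AmazonS3,PutObject,123
AmazonS3,GetObject,4567
"""

def total_usage(report_text: str) -> int:
    """Sum the UsageValue column of a CSV usage report."""
    reader = csv.DictReader(io.StringIO(report_text))
    return sum(int(row["UsageValue"]) for row in reader)

print(total_usage(sample_report))  # 4690
```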

We also put together a billing statement each month; you'll get an email reminder when we do this. Here's what the statement looks like:

And that's about it! This is what utility computing is supposed to be all about: sign up, use it, and pay, without having to go through a lot of hassles or complications to get started or to pay for the service.

Developer activity around S3 continues at a rapid pace, and I already have enough material for another S3 roundup. Here goes:

Doug Kaye, who interviewed me several years ago for IT Conversations, asks about the use of S3 as a CDN, or content delivery network. Great idea, and I will pass this post along to the S3 development team.

Dave Winer makes it clear that we need to do a lot of idea sharing in order to realize the full potential of S3. There's plenty of sharing happening already on the S3 discussion forum, but there's room for more.

Frucall allows consumers to get pricing and customer rating information about items they see in stores by making a call from their cell phones.

To use the service, simply dial 1-888-FRU-SHOP (1-888-378-7467) and enter an ISBN, EIN, or a 12-digit barcode value. Frucall will tell you the best new and used prices for the product, including estimated shipping costs for both. From that point you can choose to get more information about other marketplace listings for the product, buy the product, or simply bookmark it for later. In the future you will also be able to get recommendations.

A few minutes of surfing, a scan of the S3 forum, a couple of Technorati and PubSub alerts, and a del.icio.us S3 tag brought all of the following cool S3 activity to my attention:

Dave Winer wants to have a Bay Area S3 conference. That's an interesting idea, and I'll see what we can do to support it. Dave is already storing data in S3; check out the newer images on Scripting.com and you'll see that they are served up by S3.

The Mission Data blog reports on their effort to move their data over to S3, using some modifications to the Ruby samples to support streaming of data, as reported here.

Dominic Da Silva released version 1.0 of his #Sh3ll ("Sharp Shell"), and (cleverly enough) made the bits available for download from S3. More details in this forum post. He's also released version 1.0 of jSh3ll, a Java version of this application.

Matt Croydon talks about using S3 to back up Flickr photos and says "After uploading 160 or so photos to Amazon, I owe them about a penny."

One of the commenters to the blog was surprised to find out that the objects in S3 are URL-addressable. This is absolutely the case, and is one of the very cool aspects of S3.

In fact, the image at right is stored within S3; I put it there using the S3 Perl / Curl sample.

You can verify this by right-clicking on the image and inspecting its properties. The image's URL is http://s3.amazonaws.com/aws_blog/images/chicago_crime.png . In this case, aws_blog identifies the S3 bucket, and images/chicago_crime.png identifies the object (the image) within the bucket.

When I stored the image into S3 I set the content-type to "image/png" so that the browser would know that it was in fact an image. When S3 processes the HTTP GET from the browser, it returns the content type (along with any other S3 metadata attached to the object) using HTTP headers. Here's what it returned for the image:
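Here's a small sketch (in Python, with a hypothetical helper name) of the headers you might attach to such a PUT. Beyond Content-Type, S3 reserves the x-amz-meta- prefix for user metadata, and echoes those headers back in the responses to later requests:

```python
def put_headers(content_type: str, metadata: dict[str, str]) -> dict[str, str]:
    """Build headers for an S3 PUT: a Content-Type plus user metadata."""
    headers = {"Content-Type": content_type}
    for name, value in metadata.items():
        # S3 stores x-amz-meta-* headers with the object and returns
        # them in the response headers of every GET and HEAD.
        headers[f"x-amz-meta-{name}"] = value
    return headers

headers = put_headers("image/png", {"source": "aws-blog"})
```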

As you can see, it would be easy to use S3's metadata facility to store extra information about each object. Once information is stored in S3, you can retrieve this metadata without fetching the entire object, using a simple and efficient HTTP HEAD request.
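Such a HEAD request needs nothing beyond a standard HTTP client. Here's a quick sketch using Python's standard library, aimed at the bucket and object from the image above (this illustrates a request against a publicly readable object; a private one would also need the signing step described earlier):

```python
import urllib.request

def head_request_for(bucket: str, key: str) -> urllib.request.Request:
    # HEAD returns the same headers a GET would (Content-Type,
    # Content-Length, any x-amz-meta-* metadata) without the body.
    url = f"http://s3.amazonaws.com/{bucket}/{key}"
    return urllib.request.Request(url, method="HEAD")

req = head_request_for("aws_blog", "images/chicago_crime.png")
# With network access, the headers are one call away:
#   urllib.request.urlopen(req).headers["Content-Type"]
```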

In an article in today's New York Times, writer John Markoff says "The Internet is entering its Lego era." John talks about the rise of componentized software, the rapid pace of development, and the all-important fact that a lot of the innovation is now taking place in small companies and among small, distributed groups of developers working in their spare time. He notes that the cost of development is now so low that traditional venture financing is no longer a prerequisite to building a great product, and further extrapolates that this model may actually obviate the need to outsource development work to low-cost producers.

John talks about Amazon's S3 and about the Mechanical Turk, and about the power that simple protocols like REST (and JSON too, though he didn't mention it) have to stitch the web's components together, in what Dion Hinchcliffe is now calling a web-oriented architecture, or WOA.