
Google Storage for Developers takes on Amazon S3

Google has announced a new cloud storage platform, Google Storage for Developers, to compete with Amazon S3.

Google has launched a new cloud storage service competing directly with Amazon's S3. Google Storage for Developers offers scalable, high-bandwidth storage, with an easy-to-use RESTful API.
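As a rough illustration of what a RESTful blob-store API looks like, here's a minimal Python sketch; the host, bucket, and object names are hypothetical, and real requests would need an Authorization header, which is omitted here.

    # Minimal sketch of RESTful blob storage: the object key maps straight
    # onto the URL path, and HTTP verbs do the work. Names are made up and
    # authentication is omitted.
    import urllib.request

    BASE = "https://storage.googleapis.com"   # assumed endpoint format
    BUCKET, KEY = "example-bucket", "photos/cat.jpg"

    # GET an object.
    with urllib.request.urlopen(f"{BASE}/{BUCKET}/{KEY}") as resp:
        blob = resp.read()

    # PUT an object: same URL, different verb, body is the raw bytes.
    req = urllib.request.Request(f"{BASE}/{BUCKET}/{KEY}", data=blob,
                                 method="PUT")
    req.add_header("Content-Type", "image/jpeg")
    # urllib.request.urlopen(req)  # would be rejected without credentials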

Google Storage will cost 17¢ per gigabyte per month, with uploads costing 10¢ per gigabyte and downloads 15-30¢ per gigabyte. Initially, Google Storage will be available only to a limited number of US-based developers, who will get 100GB of storage and 300GB of bandwidth per month at no charge.
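To make the pricing concrete, here's a quick back-of-the-envelope calculation at the quoted rates; the usage figures are invented for illustration.

    # Rough monthly Google Storage bill at the quoted rates.
    storage_gb, upload_gb, download_gb = 500, 50, 200   # hypothetical usage

    storage_cost = storage_gb * 0.17     # 17 cents/GB-month stored
    upload_cost = upload_gb * 0.10       # 10 cents/GB uploaded
    download_cost = download_gb * 0.30   # top of the 15-30 cent range

    total = storage_cost + upload_cost + download_cost
    print(f"${total:.2f}/month")         # $150.00 at these numbers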

This announcement comes just a day after Amazon introduced a cut-price version of S3 that trades away some durability for a lower price. Amazon's Reduced Redundancy Storage offers 99.99 percent durability for 10¢ per gigabyte, compared to standard S3's 15¢ per gigabyte. Amazon's pricing structure also offers discounts for heavy users.
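A storage-only comparison at the quoted first-tier rates (volume discounts ignored) shows how the options stack up; the dataset size is arbitrary.

    # Monthly storage cost for a hypothetical 1,000GB dataset.
    gb = 1000
    s3_standard = gb * 0.15          # $150.00
    s3_reduced = gb * 0.10           # $100.00
    google = gb * 0.17               # $170.00
    print(f"RRS saves ${s3_standard - s3_reduced:.2f}/month vs. standard S3")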

Though Google has its AppEngine cloud computing platform, it has previously lacked a storage solution to go with it. As such, it was missing a key component for many Web applications, and represented a big drawback relative to Amazon's more comprehensive offerings. Google Storage is a step towards remedying this deficit, but it's going to be a while before the search giant's offerings will rival the maturity of the much more established—and cheaper—S3.

S3 and Google Storage are both key:blob stores. I don't understand why Google thinks they can charge more for theirs. Do they think it's faster? One of the nice things about S3 is that traffic to and from EC2 is free. Google lacks an equivalent to EC2, but it is possible that some apps running on AppEngine or even Google Predict will make use of Google Storage.
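For readers unfamiliar with the term: a key:blob store is essentially a giant remote dictionary mapping opaque string keys to byte blobs, with little more API surface than put/get/delete/list. A toy in-memory sketch:

    # Toy illustration of key:blob semantics; real stores like S3 and
    # Google Storage expose roughly this interface over HTTP.
    class BlobStore:
        def __init__(self):
            self._data = {}                  # key -> bytes

        def put(self, key: str, blob: bytes) -> None:
            self._data[key] = blob

        def get(self, key: str) -> bytes:
            return self._data[key]

        def delete(self, key: str) -> None:
            del self._data[key]

        def list(self, prefix: str = "") -> list[str]:
            return [k for k in self._data if k.startswith(prefix)]

    store = BlobStore()
    store.put("backups/2010-05-20.tar.gz", b"...")
    print(store.list("backups/"))            # ['backups/2010-05-20.tar.gz']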

"Though Google has its AppEngine cloud computing platform, it has previously lacked a storage solution to go with it."

From day one, AppEngine has had its own BigTable-style datastore with its own API and GQL (Google Query Language). I haven't read into this new stuff enough to understand how it is different.
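For contrast with a plain blob store, App Engine's datastore of that era stored typed, structured entities queried via GQL; the model and field names below are invented for illustration.

    # Sketch of the App Engine (Python) datastore API: structured entities
    # with typed properties, queried with GQL. Model/fields are made up.
    from google.appengine.ext import db

    class Greeting(db.Model):
        author = db.StringProperty()
        content = db.StringProperty(multiline=True)
        date = db.DateTimeProperty(auto_now_add=True)

    Greeting(author="alice", content="hello world").put()

    # GQL: a SQL-ish query language over entities, not blobs.
    recent = db.GqlQuery(
        "SELECT * FROM Greeting WHERE author = :1 ORDER BY date DESC",
        "alice")
    for g in recent.fetch(10):
        print(g.content)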

Well, this is just (fairly dumb) bulk storage. Not structured/semi-structured, but rather a place to stash data. BigTable is reasonably analogous to Amazon's SimpleDB, but until now Google lacked an S3 equivalent.

I was actually a bit surprised that Google isn't (apparently) offering free transfers to/from AppEngine. It seems an obvious tie-in, especially since it's what Amazon does.

Yeah I was really surprised too, because Google owns more fiber than God, running all over North America, and they can afford to give away a wave to their users. I'm positive that internal network traffic costs Google less than it costs Amazon.

Faster (data-to-client) would be a distinct possibility. I could see that being a real selling point for the rest of the world that can't afford Akamai.

@metageek: The reason they have these cloud services in the first place is that their infrastructure is scaled to handle big spikes in Amazon.com traffic. Because those big spikes are relatively infrequent, most of the compute power sits idle most of the time, so they have turned it into a commodity with EC2, S3, etc. I would bet 99.99% uptime means you get access to your services unless they're running at peak, and then you get bumped off until that's over. From that perspective, I can see why the higher-reliability tier would be worth charging so much more for, because it actually requires them to have guaranteed capacity beyond even their peak load.

I dumped Amazon EC2 in favor of Rackspace Cloud Servers and Cloud Files. I had been using EC2 for several integration projects (where the cloud VM extracts, transforms, and loads data between multiple sources and targets). It was nice in the sense that I could fire up the VM instance only when I needed the integration job to run. However, our client integrations are more frequently realtime or near-realtime, so there's almost never a point where the instance could be shut down.
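The fire-up-on-demand pattern described above looks something like this with today's boto3 SDK (which postdates this discussion); the region and instance ID are made up.

    # Start a VM only for the duration of a batch integration job, then
    # stop it again. Sketch only: instance ID and region are hypothetical.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    INSTANCE = "i-0123456789abcdef0"

    def run_integration_job():
        ec2.start_instances(InstanceIds=[INSTANCE])
        ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE])
        try:
            pass  # ...extract, transform, and load against the running VM...
        finally:
            ec2.stop_instances(InstanceIds=[INSTANCE])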

Rackspace's services have been great as a replacement. The management interface is great, and I can even use the iPhone app for some status and maintenance tasks. The APIs for both the servers and the Cloud Files platform are very, very clean and simple. Performance seems to be about 50-80% better than similarly tiered VMs on EC2, and the Rackspace options are half the cost.