Sunday, May 23, 2010

Boto and Google Storage

You probably noticed, in the blitz of announcements from the recent I/O conference, that Google now has a storage service very similar to Amazon's S3. The Google Storage (GS) service provides a REST API that is compatible with many existing tools and libraries.

In addition to the API, Google also announced some tools to make it easier for people to get started with the Google Storage service. The main tool is called gsutil and it provides a command line interface to both Google Storage and S3. It lets you reference files in GS, in S3, or even on your local file system using URL-style identifiers. You can then use those identifiers to copy content between your local file system and the storage services, between locations within a service, or even between the two services. Cool!

What was even cooler to me personally was that gsutil leverages boto for API-level communication with S3 and GS. In addition, Google engineers have extended boto with a higher-level abstraction of storage services that implements the URL-style identifiers. The command line tools are then built on top of this layer.
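To give a feel for the idea, here is a small sketch of how scheme-based storage URIs can be split into a provider, a bucket, and an object name. This is a hypothetical illustration using only the standard library, not boto's actual abstraction layer, which returns richer objects that know how to talk to each service.

```python
from urllib.parse import urlparse

def parse_storage_uri(uri):
    """Split a gsutil-style URI into (provider, bucket, object_name).

    Hypothetical sketch: the real boto layer builds service-aware
    objects rather than returning plain tuples.
    """
    parsed = urlparse(uri)
    if parsed.scheme in ("gs", "s3"):
        # Cloud storage: the scheme selects the service and the
        # netloc portion is the bucket name.
        return parsed.scheme, parsed.netloc, parsed.path.lstrip("/")
    # Anything else is treated as a path on the local file system.
    return "file", None, uri

print(parse_storage_uri("gs://mybucket/photos/cat.jpg"))
```

With identifiers like these, a copy command only has to parse its source and destination URIs and dispatch to the right service (or the local disk) for each side.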

As an open source developer, it is very satisfying when other developers use your code to do something interesting, and this is certainly no exception. I also want to thank Mike Schwartz from Google for reaching out to me prior to the Google Storage session and giving me a heads-up on what they were going to announce. Since then, Mike and I have been collaborating to figure out the best way to support the use of boto in the Google Storage utilities. For example, the storage abstraction layer Google developed to extend boto is generally useful and could be extended to other storage services.

In summary, I view this as a very positive step in the boto project. I look forward to working with Google to make boto more useful for them and for the community of boto users. And as always, feedback from the boto community is not only welcome but essential.