Amazon fuses your storage system with its cloud

Amazon Web Services is rolling out a new feature called Storage Gateway that lets companies upload data to its cloud-storage services directly from their on-premises storage systems, the company said on Wednesday. AWS's goals with Storage Gateway appear threefold: cloud backup, cloud bursting and, ultimately, primary cloud storage — all without the latency concerns that usually accompany such moves.

AWS Storage Gateway works by securely storing data as snapshot images in S3, from which that data can be restored into the Amazon Elastic Block Store (EBS) service if desired. Once there, users can process the data with Amazon EC2 cloud computing instances. Storage Gateway keeps data on local gear while asynchronously uploading it to Amazon's cloud. This lets companies leverage the cloud when they need it, while easing the latency concerns that come with uploading large amounts of data to the cloud for backup, or with spanning local storage and cloud-based resources.
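The "local write, asynchronous cloud upload" behavior described above is a classic write-back pattern. As a rough illustration only — this is a toy model with an in-memory dict standing in for S3, not AWS code — it can be sketched like this:

```python
import queue
import threading

class WriteBackGateway:
    """Toy model of the gateway's write-back behavior: writes land on
    "local" storage immediately and are replicated to a simulated
    "cloud" store by a background thread."""

    def __init__(self):
        self.local = {}          # primary copy, always current
        self.cloud = {}          # asynchronous replica (stand-in for S3)
        self._uploads = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def write(self, block_id, data):
        # The local write completes at local-disk latency...
        self.local[block_id] = data
        # ...while the cloud upload is queued for the background worker.
        self._uploads.put((block_id, data))

    def _drain(self):
        while True:
            block_id, data = self._uploads.get()
            self.cloud[block_id] = data      # simulated upload to the cloud
            self._uploads.task_done()

    def flush(self):
        # Block until every queued upload has reached the "cloud".
        self._uploads.join()

gw = WriteBackGateway()
gw.write("block-0", b"payload")
assert gw.local["block-0"] == b"payload"   # visible locally right away
gw.flush()
assert gw.cloud["block-0"] == b"payload"   # replicated asynchronously
```

The point of the pattern is that applications see local-storage latency on every write, while durability in the cloud catches up in the background.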

Gateways attach to application servers as standard iSCSI devices, and each gateway supports up to 12 volumes and 12 TB of total storage. The service costs $125 per month per gateway, and snapshot pricing starts at 14 cents per gigabyte.
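For a back-of-the-envelope feel for those numbers, here is a minimal cost sketch assuming the quoted snapshot price is per gigabyte-month; real AWS bills would also include request and data-transfer charges, which are not modeled here:

```python
# Prices quoted above: $125 per gateway per month, and snapshot
# storage starting at $0.14 per GB (assumed here to be per GB-month).
GATEWAY_FEE = 125.00      # USD per gateway per month
SNAPSHOT_RATE = 0.14      # USD per GB of snapshot data per month (assumption)

def monthly_cost(gateways, snapshot_gb):
    """Estimated monthly bill for some gateways plus snapshot storage."""
    return gateways * GATEWAY_FEE + snapshot_gb * SNAPSHOT_RATE

# One gateway backing up 2 TB (2,048 GB) of snapshots:
print(round(monthly_cost(1, 2048), 2))  # 411.72
```

So a single gateway holding a couple of terabytes of snapshots lands in the low hundreds of dollars a month at these rates.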

However, the coolest part about Storage Gateway might be yet to come. Although the current iteration requires companies to keep complete copies of their data locally, the service will soon enable an on-premises caching scenario in which frequently accessed data remains on local storage attached to local servers, while the entire data set resides only on Amazon's cloud. Several vendors, including StorSimple, TwinStrata and Riverbed Technology — although no longer the hyped-up Cirtas Systems — are currently pushing this approach using physical appliances, but AWS's offering could be more palatable to some because it doesn't require bringing in a new hardware or software vendor.