Our first experiment with AWS was for the 2012 launch of our community platform, ‘Spaces’. We began on the Heroku Platform as a Service (PaaS) – with files stored in Amazon’s Simple Storage Service (S3). S3 stores the files that community members upload: PDF documents, images, etc. The cost is astoundingly low – a whole 3.3 cents per GB per month! (Sydney pricing as at 5/3/2015).

In those days AWS was yet to open Australian data centres. Our files were stored in the US alongside the Heroku service (itself hosted on AWS).

The AWS Storage Gateway

2012 brought with it some exciting moves for AWS + Newington. Two key developments:

On campus we deploy a VM image that manages the gateway in either ‘gateway-cached’ or ‘gateway-stored’ mode, connecting to the remote volume and presenting it as a CIFS share.
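For anyone curious what that looks like from the client side, a minimal sketch of mounting such a CIFS share on a Linux host might be (all host names, share names and paths here are illustrative, not our actual configuration):

```shell
# Hypothetical example only: mount the gateway-backed CIFS share.
# //sgw-vm.example.local/learning-media is a placeholder server/share name.
sudo mkdir -p /mnt/learning-media
sudo mount -t cifs //sgw-vm.example.local/learning-media /mnt/learning-media \
    -o credentials=/etc/cifs-credentials,uid=media,gid=media

# Or the equivalent /etc/fstab entry so it mounts at boot:
# //sgw-vm.example.local/learning-media  /mnt/learning-media  cifs  credentials=/etc/cifs-credentials,_netdev  0  0
```

Windows clients would simply map the share as a network drive; the gateway VM handles moving blocks to and from the cloud-side volume behind the scenes.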

We just pay for what we use.

Our initial use case was to create a ‘learning media’ drive – a CIFS share larger than the spare SAN storage we had available, suited to the ‘write once; seldom read’ pattern we often have with campus videos and project work.

Outgrowing Heroku

Platform as a Service with Heroku was amazingly easy for our Spaces app (Ruby on Rails + PostgreSQL). What we rapidly discovered, however, was that it was getting very expensive to keep scaling. Meanwhile we had growing expertise with Amazon’s tools, and in 2013 we steadily transitioned the service to AWS directly. I’d love to do a full write-up, but in lieu of that, the quick summary of the service is:

Postgres with automatic failover and snapshots on the Relational Database Service (RDS)

Load balancing and other magic from the Elastic Load Balancer (ELB)
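As a rough sketch of what standing up those two pieces involves with the AWS CLI – instance names, sizes and zones below are placeholders, not our actual settings:

```shell
# Hypothetical example: a Multi-AZ Postgres instance on RDS.
# Multi-AZ gives the automatic failover; automated snapshots come
# from the backup retention setting.
aws rds create-db-instance \
    --db-instance-identifier spaces-db \
    --engine postgres \
    --db-instance-class db.m3.medium \
    --allocated-storage 100 \
    --multi-az \
    --backup-retention-period 7 \
    --master-username spaces \
    --master-user-password 'CHANGE_ME'

# Hypothetical example: a classic Elastic Load Balancer in Sydney,
# forwarding HTTP to the app instances behind it.
aws elb create-load-balancer \
    --load-balancer-name spaces-lb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --availability-zones ap-southeast-2a ap-southeast-2b
```

In practice the app servers themselves then register with the load balancer, and the Rails app points its database configuration at the RDS endpoint rather than a local Postgres.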

By late 2013, however, it was clear that what we were seeing with AWS had the potential to be far more than a side project. With the right strategy we could realistically speak of a time when the College would run few if any critical services on site. It was time to get down and dirty and do some planning.