RightScale Supports Amazon EC2 Europe

Our platform now supports Amazon EC2 in Europe! Several of our customers have already noticed and are running servers there using RightScale. This brings the cloud offering in Europe practically up to par with what's available in the US. After operating production servers in the EU for about a month now, I must say that it's been pleasantly uneventful! It actually took me a few server launches for all this to sink in: in a previous life, I had to send employees on overseas trips to scout out hosting facilities before we could ship servers there. And then it cost a ton in "remote hands" fees to have them racked and wired to spec. Now it's all just a drop-down menu!

The big benefit of Amazon EC2 in Europe is the reduced latency to European users. It should also help companies adhere to EU regulatory requirements for data storage and processing, and for companies operating globally it provides an additional level of redundancy and disaster recovery. The main difference between EC2 US and EC2 EU is the absence of SQS and SDB in the EU. I hope SQS in particular will be available soon, since our many RightGrid users need it to operate their deployments. While the latency to the US isn't a huge deal for SQS, it sure would be nice to have all the pieces of the puzzle within the same region.

The way this looks in the RightScale platform is deceptively simple. You can now place servers in different regions within the same RightScale deployment, and you can manage them as a unit. This means that configuration inputs can apply across regions, and that monitoring and alerting are in one place. Below is a screen shot of one of our own deployments where an EU server sits side by side with peer servers in several US availability zones.

The big surprise when Amazon announced the EU region was that they decided to offer what I would describe as a separate cloud, disconnected from EC2 US. As I mentioned in my previous blog post, this was the right decision because it isolates the regions from one another from a failure perspective. Before this, I kept wondering how AWS would convince us that EC2 couldn't go down worldwide all at once due to a software bug in the front-end API servers. Now the answer is pretty clear. That's a really good thing.

To help our users operate across these two clouds, we added features to replicate images and ServerTemplates from one EC2 region to the other. If you have an AMI in the US and want to launch it in the EU, you can simply press the Replicate button and we'll make a copy of it in the EU:

The same applies to ServerTemplates, which you can replicate to the other region; the replica automatically attaches the right image, kernel, and ramdisk underneath. Replicating the images is required by the EC2 architecture, and we've already replicated all our RightImages, so the majority of our customers don't need to deal with this at all. Replicating the ServerTemplates creates an additional level of duplication which we'll eliminate in the next release, making it even easier to operate in both regions.
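For readers scripting this kind of replication themselves, the operation boils down to asking the destination region's EC2 API to copy an image from the source region. A minimal sketch follows; it assumes a client object in the style of boto3's EC2 client (e.g. `boto3.client("ec2", region_name="eu-west-1")`), and the function name `replicate_image` is my own, not a RightScale or AWS API:

```python
def replicate_image(dest_ec2, source_image_id, source_region, name):
    """Copy an AMI into the region that dest_ec2 is bound to.

    dest_ec2: any client exposing a copy_image call in the style of
    boto3's EC2 client for the *destination* region.
    Returns the new image ID, which is only valid in that region.
    """
    resp = dest_ec2.copy_image(
        Name=name,
        SourceImageId=source_image_id,
        SourceRegion=source_region,
    )
    return resp["ImageId"]
```

Note that the call is made against the destination region, not the source: each region's API only creates objects in its own namespace, which is exactly why the copy yields a brand-new image ID.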

Getting the EU support into all parts of our system took a little longer than we had hoped. The primary difficulty was that we hadn't yet upgraded our EC2 code base to our new multi-cloud structure. And to be frank, the way Amazon decided to separate the US and EU clouds didn't help. It's one thing to require a different API front-end for each region; it's another not to keep a global object namespace: instance ID i-123456 can exist in both the US and the EU! But now all this code is refactored and we're off to the next set of features.
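The practical consequence of the missing global namespace is that any cross-region inventory has to key resources by (region, instance ID) rather than by the bare ID. A small illustrative sketch (region and ID values are made up):

```python
# Instance IDs are only unique within a region, so a cross-region
# inventory keys resources by the (region, instance_id) pair.
instances = {}

def record_instance(region, instance_id, description):
    instances[(region, instance_id)] = description

# The same bare ID in two regions refers to two different servers:
record_instance("us-east-1", "i-123456", "app server (US)")
record_instance("eu-west-1", "i-123456", "app server (EU)")

# A lookup by the bare ID alone is ambiguous -- it matches both:
matches = [key for key in instances if key[1] == "i-123456"]
```

Keying by the pair is exactly the kind of refactoring a code base written against a single global EC2 endpoint has to go through.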

Since we're talking about namespaces, I might as well comment on an oddity that has crept into the AWS services. There are two different strategies within AWS for handling regions: S3 is handled globally while EC2 is split per region. If you look at S3, there is a global namespace for buckets (the top-level containers in S3). If I point you to the rightscale-test bucket, you can't tell from the name where it's located until you access it, and a somewhat elaborate DNS and redirect scheme ensures that your access "bounces" to the correct region. As a result, our UI has a single "S3 browser"; it wouldn't make sense to have an "S3 US browser" and an "S3 EU browser." For EC2 resources, however, everything is duplicated: there is a list of EC2 EU instances and a separate list of EC2 US instances, same for EBS volumes, etc. We stitch this back together when you look at a deployment that can span clouds. The big question now is what we'll see for SQS and SDB. Which of these two schemes will they follow? Only time will tell.
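Because the S3 namespace is global, discovering where a bucket actually lives means asking the service rather than parsing the name (boto3's `get_bucket_location` returns a `LocationConstraint` for this). A minimal sketch of decoding that answer, assuming the early constraint values (US Standard reported an empty constraint and Europe reported "EU"; treat the mapping as illustrative):

```python
def bucket_region(location_constraint):
    """Map an S3 LocationConstraint value to a region name.

    S3 buckets share one global namespace, so the bucket name alone
    doesn't reveal the region; the service has to be asked. US Standard
    historically reported an empty/None constraint, and the European
    region reported "EU" before the eu-west-1 name came into use.
    """
    if not location_constraint:      # US Standard: None or ""
        return "us-east-1"
    if location_constraint == "EU":  # early name for eu-west-1
        return "eu-west-1"
    return location_constraint       # later regions report their own name
```

EC2 offers no equivalent global lookup: each region's API only knows its own instances and volumes, so a tool that spans regions must create one client per region up front and merge the results itself.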