The "cloud" is one of those things that I totally get and totally intellectualize, but it still consistently blows me away. And I work on a cloud, too, which is a little ironic that I should be impressed.

I guess part of it is historical context. Today's engineers get mad if a deployment takes 10 minutes or if a scale-out operation has them waiting five. I used to have multi-hour builds, and a scale-out operation involved a drive over to PC Micro Center. Worse yet, it meant having a Cisco engineer fly in to configure a load balancer. And engineers in the generation before mine could lose hours to a single punch card mistake.

It's the power that impresses me.

And I don't mean CPU power, I mean the power to build, to create, to achieve, in minutes, globally. My, that's a lot of comma faults.

Someone once told me that the average middle-class person today is more powerful than a 15th-century king. You eat on a regular basis, you can fly across the country in a few hours, and you have antibiotics, so you probably won't die from a scratch.

Cloud power is that. Here's what I did last weekend that blew me away.

I just took a website, bought a wildcard SSL cert, deployed to Asia, Europe, and US, and geo-load-balanced the secure traffic in 45 min. O_O

Scaling an Azure Website globally in minutes, plus adding SSL

I'm working on a little startup with my friend Greg, and I recently deployed our backend service to a small Azure Website in "North Central US." I bought a domain name for $8 and set up a CNAME to point to this new Azure Website. Setting up custom DNS takes just minutes, of course.
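At this stage the record simply aliases the domain to the single site. Sketched as a zone-file entry, with hypothetical names standing in for my real ones, it's just one CNAME:

```
; www points at the single North Central US site (hypothetical names)
www.mystartup.com.    3600    IN    CNAME    mystartup.azurewebsites.net.
```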

At this point, I've got three websites in three locations, but they aren't associated with each other in any way.

I also added a "Location" configuration name/value pair for each website so I could display the location at the bottom of the site and confirm at a glance that global load balancing is working. I just pull it out like this:

location = ConfigurationManager.AppSettings["Location"];

I could also potentially glean my location by exploring the environment variables, like WEBSITE_SITE_NAME for my application name, which I made match my site's location.
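For reference, the name/value pair itself is just an ordinary app setting. In a site's web.config it would look like the sketch below (the value here is hypothetical; in practice I set it per site in the portal, which overrides web.config for that deployment):

```xml
<!-- Each region's site gets its own value for this key -->
<appSettings>
  <add key="Location" value="North Central US" />
</appSettings>
```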

Now I bring these all together by setting up a Traffic Manager in Azure.

I change my DNS CNAME to point to the Traffic Manager, NOT the original website. Then I make sure the Traffic Manager knows about each of the Azure Website endpoints.
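The net effect on name resolution, again with hypothetical names, is a two-hop CNAME chain. Traffic Manager answers the middle hop with whichever endpoint its policy picks:

```
www.mystartup.com.             CNAME   mystartup.trafficmanager.net.
mystartup.trafficmanager.net.  CNAME   mystartup-asia.azurewebsites.net.   ; chosen per policy
```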

Then I make sure that my main CNAME is set up in my Azure Website, along with the Traffic Manager domain. Here's my DNSimple record:

And here's my Azure website configuration:
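The screenshots don't reproduce here, but the gist of that configuration is that the site's custom domain list contains both the Traffic Manager name and my own domain (hypothetical names again):

```
mystartup.trafficmanager.net
www.mystartup.com
```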

Important Note: You may be thinking, "Hang on, I thought there was already load balancing built into Azure Websites?" It's important to remember that there's the load balancing that selects which data center, and there's the load balancing that selects an actual web server within that data center. Also, you can choose between straight Round Robin, Failover (an ordered list of sites across data centers, used in turn when one is unavailable), or Performance, for when you have sites in different geographic locations and you want the "closest" one served to each user. That's what I chose. It's all automatic, which is nice.

Since the Traffic Manager is just going to resolve to a specific endpoint, and all my endpoints already have a wildcard SSL cert, it all literally just works.

Sponsor: Big thanks to Aspose for sponsoring the blog feed this week. Aspose.Total for .NET has all the APIs you need to create, manipulate and convert Microsoft Office documents and a host of other file formats in your applications. Curious? Start a free trial today.

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

You didn't quite touch on this point, and it's one I know some Ops folks may have:

You installed the SSL cert in Central US to start. Did you also install it in the other regions, or were those different certs? At the end, did you serve the certs from the load-balanced servers, or did you reconfigure the load balancer to serve the SSL cert?

Scott - Question: What about data availability in the other locations when using something like Azure Table Storage (not unique to ATS, but using as example)? Initially the site is in North Central, along with the storage account and table storage data. Once you scale it out to Europe and SE Asia, is there a way to also have the data available in those locations? If not, there seem to be at least two concerns that impact performance and cost:

1. Would there not be increased latency and reduced performance when accessing and querying table storage data between Europe/Asia/wherever and the North Central data center?

2. If there is no way to have the data within the serving DC, wouldn't charges now apply for data transfer between data centers, specifically for every request out of Europe/Asia, plus the cost of sending the data out of the North Central DC?

I am hoping you or someone is able to point out a solution for the above issues, or better still that I am wrong or confused, because needing to access data is not an uncommon scenario :)

BB

Tuesday, May 06, 2014 8:49:04 AM UTC

Just to ensure that they are all in sync, couldn't you have used David Ebbo's site extension to replicate the site to the other sites (although this may be expensive in the long run), or just let them all fetch from the same git repo? :)

//Dennis

Dennis

Tuesday, May 06, 2014 11:14:54 AM UTC

Scott, how would you handle the data side of things in this scenario? E.g. I'm assuming you are using Azure SQL or something? I can't quite fathom how data replication and access should work in a geo-load-balanced environment.

Hi, I have the same question as Beyers. It's great to scale the websites across the globe, but what if the data is dependent and must stay in sync? It's too much of a performance hit to go across different data centers for SQL Azure. Be great to get your thoughts on this.

Great article btw.

Isuru

isuru

Tuesday, May 06, 2014 3:08:26 PM UTC

Hi Scott,

I have the same question as BB and Beyers Cronje.

If I have a SQL Azure database in the same data center as the original site, what would be the best approach to replicate to the other two? I don't think that having the server in Asia query the server in North Central US is going to be that good for performance.

My questions are: When should I use one or the other? Do I need to use them together, or is there a case where I should use websites with worker roles, for example?

If I put all of the updates in the webjobs/worker roles, and I have one storage account for each website, should I use the secondary storage connection string, or should I replicate the data between the storage accounts manually?

Luiz Bicalho

Tuesday, May 06, 2014 8:18:33 PM UTC

Azure WebSites is super sweet.

As for deploying to multiple sites: instead of pushing straight from your local git repo, you could set up deployment from GitHub/Bitbucket(/Dropbox?), so that you only push once, and webhooks take care of publishing the newest and sweetest code to all locations!

Scott, any reason why you didn't use Kudu/SCM extensions to globally sync the code? I learned about it at Build 14 and leverage the heck out of it. I've found that it reduces the possibility of some "oops" errors when doing manual deployments to multiple nodes.

Chris b!

Friday, May 09, 2014 3:10:38 PM UTC

@Scott - any ideas/insights on the questions about data (table storage & other) being synced to the other data centers? The increased cost for data leaving a location and the reduced speed of access across DCs appear to be real problems for global scaling.

BB

Saturday, May 10, 2014 3:47:56 AM UTC

Like the guys above, I would love to hear your thoughts re: handling the persistence layer. In my case, horizontally partitioning the DBs and having multiple instances in each of the regions works well for me performance- and scalability-wise (for my specific use case). If it didn't, I would probably run a distributed cache instance (memcached is my preference) in each region and pre-emptively populate my queries and domain objects out of process from a single (region-wise) SQL Server cluster. Admittedly, write executions would still be delayed; however, I prioritize read performance (searches, web page views) over data entry, where people typically accept (or are at least more tolerant of) a slightly longer delay. And again, I'd pump messages and run any heavy DB-related activity out of process where permitted. Please show me the light, Scott. (kreloses, on a bus from Malaysia to Singapore, saving his sanity with your blog)

Just curious what you're using on the backend? SQL Database? Or are you running your own SQL Server VMs? Or, are you using something NoSQL like Mongo?

Just curious what route you went for persistence.

Adam Anderly

Sunday, May 18, 2014 10:56:44 AM UTC

Great article showing how simple it is to scale out across data centres, but as others have said, surely the tricky bit is the state management, in my case primarily SQL Azure. One assumes you either sync the data continuously, or rely on a single instance and incur bandwidth expense and a potential bottleneck. I guess the best solution will come down to individual requirements (performance vs. availability vs. cost) and to existing application architectures.

I agree, BB. Scott, can you please answer his question? Global Traffic Manager is useless if the back-end data is required to stay synchronized.

My assumption is that I need to employ some method of data synchronization on the back-end, or try to use front-end caching solutions to keep as much data at the edge as possible, perhaps leaving the profile/order data at one location and serving the fluff up front.

Agreed - how are storage and SQL Azure handled in this? I have a single website which accesses a single DB and a single storage account, and I just want a list of regions with checkboxes and a big fat 'activate 600ms page loads around planet Earth now' button. You could always add other planets later.

Digiface

Thursday, June 12, 2014 3:44:52 PM UTC

Options I see:

1) Create a website and SQL DB for each region, then create a sync group for the SQL DBs, then add a CDN that uses your storage account, then do the Traffic Manager stuff above.

2) As above, but leave a single DB and create VNETs between the regions.

In both cases you could also use the new Redis cache on top.

In our situation it looks like there may be some room for reducing costs by choosing smaller website sizes and a locally redundant SQL DB (in the case of using multiple DBs + a sync group), now that traffic will be load balanced?

Or pay for the tech support package. BUT a follow-up post on the data side of things would obviously be well received.

Digiface

Friday, July 04, 2014 4:38:36 PM UTC

You never say how you generated the original CSR or where you got the private .key file.