In the previous post we covered deploying an app to Azure through the Azure web portal and from Visual Studio. In this post, we’ll show you how to deploy to Azure from PowerShell. This comes in really handy if you want to be able to deploy right from your build server, and who doesn’t want to do that?

Why now?

So we haven’t really gotten into much detail about Azure yet, and our app is stupidly simple, so why are we getting into mundane operational gold-plating like automating deployments from a build server?

Because it’s really important to automate your whole build/deploy pipeline as soon as possible. The later you automate it, the more time you are flushing down the toilet. Even if you don’t want to deploy automatically from your build server, if you don’t at least boil your whole deployment down to a single one-click script, you’re stealing from yourself.

When I started out with SportsCommander, I was building all the code locally in Visual Studio and then deploying through the Azure web portal (I know, caveman stuff right?). Anyhow, pretty soon I got everything built and versioned through a TeamCity build server, and even had the site being FTPed to our shared hosting test server (hello, LFC Hosting), but for production deployments to Azure I would still remote into the build server and upload the latest package from the hard drive to the Azure website. Part of this was that I wanted to be able to test everything in the test server before deploying to production, and part of this was that I wanted to make sure it didn’t get screwed up, but part of it was also the logical fallacy that I didn’t have time to sit down and spend the time to figure out how to get the Azure deployment working.

And I was wrong. Way wrong. Deploying to Azure manually doesn’t take too long, but it adds up. If it took me 15 minutes to remote into the server, browse to the Azure site, select the package, select the config, and yadda yadda yadda, it only takes a handful of times before you are bleeding whole hours. If you are deploying several times per week, this can get really expensive. Not only are you getting fewer fixes and features done, you aren’t even deploying the ones that you do have done, because you don’t have time to deploy and it’s a pain anyway. Plus, really the only reason we wanted to deploy to the test server first was to smoke test, because deploying again was such a pain that I didn’t want to have to do a whole second deployment to fix a line of code; but if I could fix that line of code and redeploy with one click, I don’t even need to waste time with the test server.

So I didn’t want to spend the time figuring out how to deploy to Azure automatically. Well, I did, but it took more than 5 minutes of Googling to find the right answer among the plethora of wrong ones, so it took a while to get done.

Hopefully you found this post in under 5 minutes of Googling so you don’t have any excuses.

Prerequisites

If you’ve been Googling around, you may have seen some posts about installing certificates. Don’t bother. This approach doesn’t require it, which is good, because that’s no fun.

Second, make sure you can run remote signed scripts in PowerShell. You only need to do this once, and if you have played around with PowerShell you’ve probably already done this. Open up PowerShell in Administrator mode (Start Button->type powershell->CTRL+Shift+Enter). Then type:

Set-ExecutionPolicy RemoteSigned

and hit Enter. You will get a message along the lines of “OMG Scary Scary Bad Bad Are you Sure!?!?! This is Scary!”. Hit “Y” to continue.

Now comes the tricky part. There is a whole bunch of PowerShell commands and certificate stuff that can get confusing. Thankfully Scott Kirkland wrote a great blog post and even put a sample script up on GitHub. I had to make a few tweaks to it to get it working for me, so here goes.

Fire up PowerShell again (doesn’t need to be Administrator mode any more), browse to your solution directory, and run:

Get-AzurePublishSettingsFile

This will launch a browser window, prompt you to log into your Azure account, and then prompt you to download a file named something fun like “3-Month Free Trial-1-23-2013-credentials.publishsettings”. Take that file, move it to your solution directory, and name it something less fun like “Azure.publishsettings”. If you open that fella up, you’ll see something like:
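Roughly speaking, it’s a small XML file along these lines (the values here are fake placeholders, not from a real subscription):

```xml
<?xml version="1.0" encoding="utf-8"?>
<PublishData>
  <PublishProfile
      PublishMethod="AzureServiceManagementAPI"
      Url="https://management.core.windows.net/">
    <Subscription
        Id="00000000-0000-0000-0000-000000000000"
        Name="3-Month Free Trial"
        ManagementCertificate="MIIKFAIBAzCC...base64..." />
  </PublishProfile>
</PublishData>
```

That ManagementCertificate value is what lets PowerShell authenticate against your subscription without manually installing a certificate, so treat this file like a password and keep it out of source control.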

Run that script from the PowerShell command line, and watch the bits fly. Yes, it will take several minutes to run.
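For reference, the script I ended up with looked something like this. This is a sketch from memory of the old Azure PowerShell module cmdlets (Import-AzurePublishSettingsFile, New-AzureDeployment, Set-AzureDeployment); all of the names and paths are placeholders you’d swap out for your own app:

```powershell
# Sketch of the deployment script (adapted from Scott Kirkland's version);
# subscription/service/storage names here are placeholders.
$subscription = "3-Month Free Trial"
$serviceName = "mmdbazuresample"
$storageAccountName = "mmdbazuresamplestorage"   # separate from the service name
$packagePath = ".\MMDB.AzureSample.Azure\bin\Release\app.publish\MMDB.AzureSample.Azure.cspkg"
$configPath = ".\MMDB.AzureSample.Azure\bin\Release\app.publish\ServiceConfiguration.Cloud.cscfg"
$deploymentLabel = "Release 1.0.123"

# Load the credentials file we downloaded a minute ago
Import-AzurePublishSettingsFile ".\Azure.publishsettings"
Set-AzureSubscription -SubscriptionName $subscription -CurrentStorageAccount $storageAccountName
Select-AzureSubscription $subscription

# Create the deployment if it's the first time, otherwise upgrade in place
$deployment = Get-AzureDeployment -ServiceName $serviceName -Slot Production -ErrorAction SilentlyContinue
if ($deployment -eq $null)
{
    Write-Host "No existing deployment found, creating a new one..."
    New-AzureDeployment -ServiceName $serviceName -Slot Production `
        -Package $packagePath -Configuration $configPath -Label $deploymentLabel
}
else
{
    Write-Host "Upgrading existing deployment..."
    Set-AzureDeployment -Upgrade -ServiceName $serviceName -Slot Production `
        -Package $packagePath -Configuration $configPath -Label $deploymentLabel
}
Write-Host "Deployment complete"
```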

A generic version of this can be found here: https://gist.github.com/4539567. Again, I borrowed from Scott Kirkland’s version, but his script assumed that your storage and cloud service were the same name, so I added a separate field for storage account name. Also, to preserve my sanity, I added a little more diagnostic logging.

Next

This was the post that I started out to write, before I decided to backfill with the more beginner stuff. From here, it’s going to be a little more ad-hoc.

Anyhow, the next post will probably be setting up your own DNS and SSL for your Azure site.

Azure Deployment Package

In order to deploy your website to Azure, you’ll need two things, a deployment package and a configuration file:

Deployment Package (.cspkg): This is a ZIP file that contains all of your compiled code, plus a bunch of metadata about your application

Configuration File (.cscfg): This is an environment-specific configuration file. We’ll get into this more later, but this lets you pull all of the environment-specific configuration away from your code which is definitely a good idea.
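To make that concrete, here’s roughly what a .cscfg looks like for a simple one-role app (the names and values here are illustrative, not copied from the sample project):

```xml
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MMDB.AzureSample.Azure" osFamily="1" osVersion="*"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="MMDB.AzureSample.Web">
    <!-- How many VM instances to run; bump this up to scale out -->
    <Instances count="1" />
    <ConfigurationSettings>
      <!-- Environment-specific values live here, not in web.config -->
      <Setting name="DatabaseConnectionString" value="..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```

You keep one of these per environment, while the .cspkg stays identical everywhere.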

So go back into the solution that we built in an earlier post (or grab it from GitHub here). To deploy this project, you will need a MMDB.AzureSample.Azure.cspkg file. But if you search your hard drive, you won’t find one yet. To create this package, you’ll need to right-click on your Azure project and select “Package”.

This will create the package, and will even launch a Windows Explorer window with the location (which is helpful, because it can be a little tricky to find):

Deploying Through Azure Website

We want to create a new Cloud Services project, so let’s click Create An Item and drill down to Compute->Cloud Service->Quick Create:

We’ll enter our URL, which will end with .cloudapp.net, but does not have to be the final name of your website.

We’ll also select a Region/Affinity Group, which is where your servers will be hosted. Select somewhere closest to where your biggest user base will be. The different areas in the US don’t make a huge difference, but US vs. Europe vs. Asia can have a big impact.

So then we click Create Cloud Service, and we have ourselves a cloud service:

Now we’ll go ahead and set it up. Click the little arrow next to the service name, then Upload A New Production Deployment, and start filling in the details:

Enter a name for your deployment. This is specific to this deployment, and the idea is that it could/should change on subsequent deployments. I usually try to put the release version number in there.

Also browse for your package and config file, check “Deploy even if one or more roles contain a single instance” (which is in fact the case with our simple little test app), and select Start Deployment:

Click the checkmark button, and then go get yourself a cup of coffee. This is going to take a few minutes.

Oh you’re back already? Go sneak out for a smoke, we have a few more minutes to go:

This is one of the downsides to Azure, these deployments can take a while, especially for new deployments. Down the line we’ll show you how to automate this so you can click a button and walk away for a while, but for now just keep watching.

Ok, so after about 5 minutes, it looks like everything is running. We can click the Instances tab to see more detail:

Back on the Dashboard screen, if you scroll down, you’ll see a whole bunch of useful stuff:

Including the Site URL. Try clicking that and let’s see what we get:

Hey, there’s our website! Nice.

Deploying Through Visual Studio

Now that you’re familiar with the Azure website, we can also deploy right from Visual Studio.

Let’s go back into Visual Studio, right click our project, and select Publish:

Now we’ll set up our subscription details. Under “Choose your subscription”, select “<Manage…>”, and then click the New button, and that will take you to the New Subscription screen:

We’ll need to create a management certificate, so under the dropdown select “<Create…>”, enter a name for the certificate, and click OK.

Now that we have that, upload the certificate to the Azure portal web site. Click the link to copy the path, and then the link to go to the Azure portal:

Once you’re there, go all the way to the bottom and select Settings->Upload Management Certificate.

Select the certificate (using the path in your clipboard) and click OK.

OK, now that we’ve uploaded our certificate, let’s go back to the New Subscription screen. Next it’s asking for our subscription ID, which is back in the Azure portal. Paste that in, give a name for your subscription, and click OK:

And now we’re all the way back to the publish screen. Now we’ll use that subscription, click Next, and…

…and now it’s asking to create an Azure storage account? Why?

The reason is that when you deploy right from the Azure portal website, it does everything all at once on the server. However, when you deploy from Visual Studio (or from PowerShell, which we’ll get into later), it first uploads the package to the storage account and then tells Azure to deploy the package from there.

Let’s enter the name of our new storage account and location (yes, it’s a good idea to use the same location as the Region/Affinity Group you entered above) and click OK.

Now we have some nice default settings to deploy with, and so let’s click Publish:

It may ask you to replace the existing deployment, and that is OK:

Now this will run for a while, and will take about as long as the deployment from the website.

Once that’s done, click the Website URL, and check out your fancy new website:

What to be careful of

When deploying through the website, make sure you build and package each time before you deploy. If you are deploying from Visual Studio, it will automatically package everything for you. However, if you build your project but forget to package it, and then upload the package to the Azure website, you’ll be uploading an older version of the package. This is the type of thing that can waste a lot of time while you try to figure out why your new changes are not appearing.

Also, as a general rule, deploying right from your development environment is bad bad bad. The list of awful reasons is long enough for another blog post, but in short, you really want to have an independent build server which is responsible for pulling the latest code from source control, building it, running any tests it can, and publishing it out to Azure.

Next

Next, we’ll cover another key part of Azure deployments, which is deploying via command line. While deploying from Visual Studio and the Azure portal is easy when you’re getting started, eventually you’re going to want to automate this from a build server.

One of the hardest problems to solve when setting up a deployment strategy is how to handle the web.configs and exe.configs. Each environment will have different settings, so every time you deploy something somewhere you need to make that web.config look different.

The quick and dirty answer is to have a separate web.config for each environment. Then during a deployment we drop the prod/web.config or staging/web.config into the web directory, and you’re good to go. However, like a lot of problematic development strategies, this is really fast and easy to get going with, but it doesn’t age very well. What happens when your DEV->STAGING->PRODUCTION environments evolve into LOCAL->DEV->QA->INTEGRATION->STAGING->PRODUCTION? Or when you have machine-specific or farm-specific settings that change from one part of the production environment to another?

Most importantly, what happens when that web.config changes for a reason other than configuration? Then you have a whole bunch of web.configs to fix, and you’re going to put a typo in at least 2 of them, it’s guaranteed.

Let’s take a look at a VERY simple web.config, created from just a basic MVC 4 project:
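Here’s the gist of that file. This is a hand-trimmed reconstruction of the MVC 4 default, not the exact file from the project, and the assumption here is that the connection string is the one line in question:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <!-- The one environment-specific line in the whole file -->
    <add name="DefaultConnection"
         connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=MyApp;Integrated Security=SSPI"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
  <appSettings>
    <add key="webpages:Version" value="2.0.0.0" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
  </appSettings>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
</configuration>
```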

That is the ONLY line that is going to change from environment to environment. Everything else there is not configuration, it’s code.

OK sure, it’s in a “configuration” file. Who cares. It is tied to your code, and it should change about as often as your code changes, not more. It certainly should not change between environments. My general rule is, “if any change to it should be checked into source control, it’s code.” The sooner you stop pretending that they are different in a way that gives you an enhanced level of configuration flexibility, the sooner you’ll be a happier person. Trust me.

The problem here is that the web.config is a confused little person. It does try to configure stuff, but it actually holds two types of stuff. As far as most people are concerned, it is for configuring their application; but the other 90% of it, which they never touch and usually don’t understand, is for configuring the underlying .NET and IIS guts, not your application directly. And once you’ve coded your application, that 90% should never change from environment to environment, unless your code is changing as well.

And of course that code does change over time. If you add a WCF web service proxy client to your application, it’s going to fill your web.config up with all sorts of jibber-jabber that you better not touch unless you know what you are doing. But deep inside there is the endpoint URL that DOES need to change from environment to environment.

Again, this is where the “have a web.config for every environment” approach really breaks down, because now you have to go through and update every one of those web.configs to add in all that crazy WCF stuff. And try not to screw it up.

So What?

So what can we do about it? We have a few options:

One option is to put all of the configuration in the database. This can introduce a lot of issues; when you configure your application to point to the database that configures your application, you run into all sorts of codependency issues that make your environments really fragile. The only time I’ve seen this be a good idea is when you have really specific change control rules about not being able to touch configuration files on the server outside of an official deployment, but configuring settings in the database through an administration page would be allowed.

I think these two options are preferable:

Drop a brand new web.config on every deployment and have your deployment utility reconfigure it, either using web.config transformations, XSLT, or a basic XML parser.

Use the configSource attribute on your settings. This lets you put all of your connectionStrings or appSettings in separate files, which are NOT updated from source control on every deployment. This way you can always drop the latest web.config without having to worry about reconfiguring it. (If you’re using Azure, this goes a step further, by having a completely separate file reserved for environment configuration, outside of your web application package.)

Both of these options work well. The first option works better if you have only a few options or if you need to update something that does not support a configSource attribute, like a WCF endpoint. The second option works better if you have a whole list of settings and can consolidate them into the connectionStrings, appSettings, etc.
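Here’s what the configSource option looks like in practice. The main web.config points at satellite files, and only those satellite files differ per environment (the file names here are just a convention, not a requirement):

```xml
<!-- web.config: checked into source control, safe to overwrite on every deploy -->
<configuration>
  <appSettings configSource="appSettings.config" />
  <connectionStrings configSource="connectionStrings.config" />
</configuration>

<!-- connectionStrings.config: a separate file that lives on the server
     and is NOT overwritten by deployments -->
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Server=PRODDB01;Database=MyApp;Integrated Security=SSPI" />
</connectionStrings>
```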

But either way, no matter what, ALWAYS drop a new web.config, and ALWAYS make sure you have a plan to treat YOUR configuration differently than the rest of the web.config.

Now that we have an empty project, once we add in a Home controller and a view, we have ourselves a simple but working MVC web application.

Now Let’s Azurify This Fella

Now if we want to deploy this to Azure, we need to create an Azure project for it. Right-click on your project and select “Add Windows Azure Cloud Service Project”

That will add a bunch of Azure references to your MVC app, and will create a new wrapper project:

You can still run the original web application the same as before, but if you run the new Azure project, you’ll get this annoying, albeit informative, error message:

Ok so let’s shut it all down and restart VS2012 in Administrator mode.

(Tip: if you have a VS2012 icon on your Windows toolbar, SHIFT-click it to start in Admin mode)

When we come back in Admin mode and run the Azure project, it’s going to kick up an Azure emulator:

And we get our Azure app, which looks much the same as our existing app, but running on another port:

The idea here is to simulate what will actually happen when the application runs in Azure, which is a little different than running in a regular IIS web application. There are different configuration approaches, and the Web/Worker Roles get fired up. This is very cool, especially when you are getting started or migrating an existing site, because it gives you a nice test environment without having to upload to Azure all the time.

However, the simulator does have its downsides. First, requiring Administrator mode is annoying. I forget to do this EVERY TIME, so right when I’m about to debug the first time, I have to shut everything down, restart Visual Studio, and reopen my solution in Admin mode. Not the end of the world, but an annoying bit of friction. Second, it is SLOW to start up the site in the simulator; not unusably slow, but noticeably and annoyingly slow, so I guess it’s almost unusably slow.

To combat this, I try to make sure that my web application runs fine all the time as a regular .NET web application, and then I just test from there. Then before I release a new feature, I test it out in simulator mode to sanity check, but being able to run as a vanilla web application makes everything a lot faster.

Also, and this is important, it forces you to keep your web application functioning independent of Azure. Besides the obvious benefit of faster debuggability, it also ensures that your application has enough seams that if you had to move away from Azure, you could. I’ve gone on and on about how great Azure is, but it might not be the right thing for everyone, or might stop being the right thing in the future, and you want to have the option to go somewhere else, so you really don’t want Azure references burned in all over the place. Even if you stay with Azure, you might want to replace some of their features (like replacing Azure Table Storage with RavenDB, or replacing Azure Caching with Redis). We’ve used a few tricks for this in the past that I’ll get into in some later blog posts.

If I had a nickel for every time our deployment strategy for a new or different environment was to edit a few config files and then run some batch files and then edit some more config files, and then it goes down in a steaming pile of failure, I would buy a LOT of Sriracha.

OK, that’s all there is to it! Let’s do it again tomorrow. Make sure you don’t burn your fingers on this blistering-fast development productivity.

I know this sounds absurd, but the reality is that for a lot of people, this really is their deployment methodology. They might have production deployments automated, but their lower environments (DEV/QA/etc.) are full of manual steps. Or better yet, they have automated their lower environments because they deploy there every day, but their production deployment is manual because they only do it once per month.

And you know what I’ve learned, the hard and maddeningly painful way? Manual process fails. Consistently. And more importantly, it can’t be avoided.

Storytime

A common scenario: a developer or an operations person (but of course never both at the same time, that would ruin the blame game) is charged with deploying an application. After many iterations, the deployment process has been clearly defined as 17 manual steps. This has been done enough times that the whole process is fully documented, with a checklist, and the folks running the deployment have done it enough times that they could do it in their sleep.

The only problem is that in the last deployment, one of the files didn’t get copied. The time before that, the staging file was copied instead of the production file. And the time before that, they put a typo into the config.

Is the deployer an idiot? No, as a matter of fact, the reason that he or she was entrusted with such an important role was that he or she was the most experienced and disciplined person on the team and was intimately familiar with the workings of the entire system.

Were the instructions wrong? Nope, the instructions were followed to the letter.

Was the process new? No again, the same people have been doing this for a year.

At this point, the managers are exasperated, because no matter how much effort we put into formalizing the process, no matter how much documentation and how many checklists, we’re still getting failures. It’s hard for the managers not to assume that the deployers are morons, and the deployers are faced with the awful reality of going into every deployment knowing that it WILL be painful, and they WILL get blamed.

Note to management: Good people don’t stick around for this kind of abuse. Some people will put up with it. But trust me, you don’t want those people.

The lesson

The kick in the pants is, people are human. They make mistakes. A LOT of mistakes. And when you jump down their throat on every mistake, they learn to stop making mistakes by not doing anything.

This leads us to Mooney’s Law Of Guaranteed Failure (TM):

In the software business, every manual process will suffer at least a 10% failure rate, no matter how smart the person executing the process. No amount of documentation or formalization will truly fix this, the only resolution is automation.

So the next time Jimmy screws up the production deployment, don’t yell at him (or sneer behind his back) “how hard is it to follow the 52-step 28-page instructions!” Just remember that it is virtually impossible.

Also, step back and look at your day-to-day development process. Almost everything you do during the day besides writing code is a manual process full of failure (coding is too, but that’s what you’re actually getting paid for). Like:

When you are partially checking in some changes to source control but trying to leave other changes checked out

When you need to edit a web.config connection string every time you get latest or check in

When you are interactively merging branches

When you are doing any deployment that involves editing a config or running certain batch files in order or entering values into an MSI interface, or is anything more than “click the big red button”

When you are setting up a new server and creating users, editing folder permissions, creating MSMQ queues, or setting up IIS virtual directories

When you are copying your hours from Excel into the ridiculously fancy but still completely unusable timesheet website

When, instead of entering your hours into a timesheet website, you are emailing them to somebody

When you are trying to figure out which version of “FeatureRequirements_New_Latest_Latest.docx” is actually the “latest”

When you are deploying database changes by trying to remember which tables you added to your local database or which scripts have or have not been run against production yet

It’s actually easier to find these things than you think. The reason is, again, it is just about everything you do all day besides coding. It’s all waste. It’s all manual. And it’s all guaranteed to fail. Find a way to take that failure out of your hands and bathe it in the white purifying light of automation. Sure it takes time, but with a little time investment, you’ll be amazed how much time you have when you are not wasting it on amazingly stupid busywork and guaranteed failure all day.

Overview

This is the first in a series of blog posts on getting started with building .NET applications in Windows Azure. We’ve been big fans of Azure for a lot of years, and we’ve used it for SportsCommander.com’s event registration site since the very beginning.

I started off writing a blog post on automatically deploying web applications to Azure from TeamCity, but I ended up with too many “this blog assumes…” statements, so I figured I should take care of those assumptions first.

So what is Azure? It’s a hosting platform, sort of like Amazon EC2, because you can deploy to virtual machines that abstract away all of the physical configuration junk that I don’t want to care about, but even better, because it also abstracts away the server configuration stuff that I also don’t want to care about, so I can just build code, ship it up there, and watch it run without having to care about RAID drives or network switches or load balancers or whether someone is logging into these servers and running Windows Update on them.

Azure has grown into a lot of things, but as far as I’m concerned, Azure’s primary product is a Platform-as-a-Service (PaaS) offering called Cloud Services. Cloud Services lets you use a combination of Web Roles and Worker Roles to run web applications and background services.

Glossary

These types of terms get thrown around a lot these days, so let’s define them.

Before the cloud came in to overshadow our whole lives, we had these options:

Nothing-as-a-Service: You went to Best Buy and bought a “server.” You’re running it under your desk. Your site goes down when your power goes out or someone kicks the plug out of the wall. Or when your residential internet provider changes your IP because you won’t shell out the money for a business account with static IPs. Then the hard drive fan dies and catches fire, your mom complains about the burning smell and tells you to get a real job.

Co-Location: This is a step up. You still bought the server and own it, but you brought it down the street to a hosting company that takes care of it for you. You are still responsible for the hardware and software, though, and when the hard drive dies you have to shlep down to the store to get a new one.

Dedicated Hosting: You still have a single physical box, but you don’t own it; you rent it from the data center. This costs hundreds to thousands of dollars per month, depending on how fancy you want to get. You are responsible for the software, but they take care of the hardware. When a network card dies, they swap it out for a new one.

Shared Hosting: Instead of renting a whole server, you just rent a few folders. This option is very popular for very small sites, and can cost as little as $5-$10/month. You have very little control over the environment though, and you’re fighting everyone else on that server for resources.

Virtual Hosting: A cross between Dedicated and Shared Hosting. You get a whole machine, but it’s a virtual machine (VM) running on a physical machine with a bunch of other virtual machines. This is the groundwork for Infrastructure-as-a-Service. You get a lot more control of the operating system, and supposedly you are not fighting with the other VMs for resources, but in reality there can always be some contention, especially for disk I/O. The cost is usually significantly less than dedicated hosting. You don’t care at all about the physical machines, because if one piece of physical hardware fails, you can be transferred to another physical machine.

In today’s brave new cloudy buzzword world, you have:

Infrastructure-as-a-service: This is basically Virtual Hosting, where you get a virtual machine and all of the physical machine info is abstracted away from you. You say you want a Windows 2008 Standard Server, and in a few minutes you have a VM running that OS. Amazon EC2 is the classic example of this.

Platform-as-a-Service: This is one level higher in the abstraction. It means that you write some code, and package it up in a certain way, give it some general hosting information like host name and number of instances, and then the hosting company takes it from there. Windows Azure is an example of this, along with Google App Engine.

Software-as-a-Service (SaaS): This means that someone is running some software that you depend on. Either you interact with it directly, or your software interacts with it. You don’t own or write or host any code yourself. The classic example of this is SalesForce.com.

So why are Azure and PaaS more awesome than the other options?

Because it lets me focus on the stuff that I really care about, which is building software. As long as I follow the rules for building Azure web applications, I don’t have to worry about any of the operations stuff that I’m really not an expert in, like whether I’ve applied the right Windows updates, whether my application domain identity is set up correctly, how to add machines to the load balancer, and a whole lot of other stuff I don’t even know that I don’t know.

Some IT folks balk at this and insist that you should control your whole stack, down to the physical servers. That is a great goal once you get big enough to hire those folks, but when you are getting started in a business, your time is your most valuable asset; you need a zero-entry ramp and you need to defer as much as possible to experts. If you are spending time running Windows Updates on your servers when you are the only developer and you could be coding, you are robbing your company blind.

Shared hosting platforms came close to solving this problem. As long as your website was just a website, and it was small, you could host it on a shared hosting service and not worry about anything, until somebody else pegs the CPU or memory, or until you need to go outside the box a little and run a persistent scheduled background process. Also, scaling across multiple servers is pretty much out of the question; you are stuck with “less than one server” of capacity, and you can never go higher.

But after you grow out of shared hosting and move up to dedicated or virtual hosting, it costs a whole lot more per month (like 5x or 10x), and the increased maintenance effort is even worse. It’s a pretty steep cliff to jump off from shared to dedicated/virtual hosting.

Azure fills that gap nicely. You are still just focusing on the application instead of the server, but you get a lot more power with features like Worker Roles and Azure Storage, and you can even expand out into full-blown VMs if you really need it.

Ah ha, VMs! What about them? And Azure Websites?

By the time you’ve read this blog post, I’m sure the Azure team will have come out with 27 new features. Ever since Scott Gu took over Azure, the rate at which they’ve been releasing new features has gotten a little ridiculous. Two of the more interesting features are Azure VMs and Azure Websites.

Azure VMs were a late feature that it seems like the Azure team didn’t even really want to add. Every Azure web instance is actually a VM, so this lets you remote into the underlying machine like it was a regular Windows server, or even create new VMs by uploading an existing VM image. This was introduced so that companies could have an easier migration path to Azure. If their app still needed some refactoring to fit cleanly into an Azure web or worker role, or it had dependencies on other systems that would not fit into an Azure application, this gives them a bridge to get there, instead of having to rewrite the whole world in one day. But to be clear, this was not introduced because it’s a good idea to run a bunch of VMs in Azure; that misses out on the core abstraction and functionality that Azure offers. If you really just want VMs, go to Amazon EC2; they are the experts.

Azure Websites are a more recent feature (still in beta) which mimics shared hosting in the Azure world. While the feature set is more involved than your run-of-the-mill shared hosting platform, it doesn’t give you nearly the power that Azure Cloud Services provides. They work best with simple or canned websites, like DotNetNuke, Orchard CMS, or WordPress. In fact, right now we’re testing out moving this blog and the MMDB Solutions website to Azure Websites to consolidate and simplify our infrastructure.

The End…?

In the coming blog posts, I’ll cover some more stuff like creating an account, setting up an Azure web application, deploying it, dealing with SQL Azure, and lots more. Stay tuned.

Yes, this is certainly the wrong way to do it. It’s not flexible, you have to change code every time the email content changes, and it’s just plain ugly.

On the other hand, much of the time (especially early in a project), this is just fine. Step 1 is admitting you have a problem, but step 0 is actually having a problem to admit to in the first place. If this works for you, run with it until it hurts.

I have a lot of code running this way in production right now, and it works swimmingly, because if there’s a content change I can code it, build it, and deploy it to production in 15 minutes. If your build/deploy cycle is fast enough, there is virtually no difference between content changes and code changes.

Back to the real problem please

But let’s say you really do want to be more flexible, and you really do need to be able update the email content without rebuilding/redeploying the whole world.
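The obvious first idea is to store the email body as a template with some tokens in it, and do a string replacement at send time. Something like this sketch (the token syntax and names here are whatever you invent, nothing standard):

```csharp
// Sketch of the naive token-replacement approach.
// In real life the template would come from a database or a text file,
// not a constant, so content changes don't require a redeploy.
public static class EmailTemplater
{
    public static string Build(string template, string customerName, string orderNumber)
    {
        return template
            .Replace("{CustomerName}", customerName)
            .Replace("{OrderNumber}", orderNumber);
    }
}

// Usage:
// var body = EmailTemplater.Build(
//     "Hello {CustomerName}, your order {OrderNumber} has shipped.",
//     "Jimmy", "SC-1234");
```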

That could work, and I’ve started down this path many times before, but looping messes you up. If you needed to include a list of order line items, how would you represent that in a template?

What else? If you are in 2003, the obvious answer is to build an XSLT stylesheet. Serialize that data object as XML, jam it into your XSLT processor, and BAM, you have nicely formatted HTML email content. Except writing those stylesheets is a nightmare. Maintaining them is even worse. If you don’t have interns that you hate, you’re going to be stuck with it.

So yes, of course you could use XSLT. Or you could just shoot some heroin. Both will make you feel good very briefly in the beginning, but both will spiral out of control and turn your whole life into a mess. Honestly, I would not recommend either.

OK, so how about some MVC templatey type things?

The whole templating idea behind XSLT is actually a good one; it’s just the execution that is painful. We have an object, we have a view that contains some markup and some presentation-specific logic, we put them all into a view engine blender, and we get back some silky smooth content.

If you were in an ASP.NET MVC web application, you could use the Razor view engine (or the WebForms view engine, if you’re into that kinda thing) to run the object through the view engine and get some HTML, but that plumbing is a little tricky. Also, what if you are not in an MVC web app, or any web app at all? If you are looking to offload work from your website to background processes, moving all of your email sending to a background Windows service is a great start, but it’s tough to extract out and pull in that Razory goodness.
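One way to get that Razory goodness outside of a web app (my suggestion here, not something this post prescribes) is the open-source RazorEngine library, which wraps the Razor parser so you can run a template against a model from any plain .NET process. A sketch, using the static API from the older versions of the library:

```csharp
// Sketch using the open-source RazorEngine NuGet package.
using System.Collections.Generic;
using RazorEngine;

public class OrderEmailModel
{
    public string CustomerName { get; set; }
    public List<string> LineItems { get; set; }
}

public static class OrderEmailBuilder
{
    public static string BuildHtml(OrderEmailModel model)
    {
        // The template could just as easily come from a file or a database.
        const string template = @"
            <p>Hello @Model.CustomerName,</p>
            <ul>
            @foreach(var item in Model.LineItems) { <li>@item</li> }
            </ul>";

        // Compiles the template and merges in the model; loops and
        // conditionals work just like in a normal Razor view.
        return Razor.Parse(template, model);
    }
}
```

Because it’s real Razor, the looping problem that kills simple token replacement (like the order line items example above) just goes away.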