To run .NET web applications on Amazon Web Services EC2 instances in an Auto Scaling group behind an Elastic Load Balancer, you need a process that deploys your application each time a new EC2 instance comes online. Octopus Deploy is a neat tool for deploying applications to multiple environments, and with a bit of setup you can make it work very reliably with AWS EC2 instances.

Your web application will be running in a VPC. Assuming that you have (and you should have) a separate VPC for each environment, you should ideally place your Octopus server in a separate “hub” VPC, as in the illustration below. That way you can easily add new environments or applications to this setup in the future. Your Octopus server should sit in a private subnet that is ideally accessible only from trusted locations.

To allow communication between your Octopus server and your application servers, use VPC Peering. You also need to add a route to the route tables associated with the subnets where your application servers run: the destination of that route should be the CIDR of the subnet where your Octopus server lives, and the target should be your peering connection. Similarly, add routes to the route table associated with your Octopus server subnet, with your application subnets' CIDRs as the destinations and the peering connection as the target.
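As a sketch of the routing described above, the entries for each side of the peering connection can be built like this. The CIDRs and the peering-connection ID are placeholder values; in real use you would pass each entry to boto3's `ec2.create_route(RouteTableId=..., **entry)`.

```python
# Sketch: build the route entries needed on each side of a hub-and-spoke
# VPC peering setup. All CIDRs and IDs below are placeholders.

def peering_routes(app_cidrs, hub_cidr, peering_id):
    """Return (app_side_routes, hub_side_routes) for the peering connection."""
    # Application subnets route traffic destined for the hub (Octopus)
    # CIDR through the peering connection...
    app_side = [{"DestinationCidrBlock": hub_cidr,
                 "VpcPeeringConnectionId": peering_id}]
    # ...and the hub subnet routes traffic for every application CIDR back
    # through the same peering connection.
    hub_side = [{"DestinationCidrBlock": cidr,
                 "VpcPeeringConnectionId": peering_id}
                for cidr in app_cidrs]
    return app_side, hub_side

app, hub = peering_routes(["10.1.0.0/24", "10.2.0.0/24"],
                          "10.0.0.0/24", "pcx-0123456789abcdef0")
```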

EC2 instances have to be registered with the Octopus server when they come online and deregistered when they are terminated. The full process is shown in the illustration below:

When a new instance is added to the load balancer, an Octopus Tentacle needs to be installed on it. Once the Tentacle is on the instance, it has to be registered with the Octopus server, which assigns it an Octopus machine name. We tag the EC2 instance with that machine name so that when the instance is terminated we know which Tentacle to deregister from the Octopus server. Finally, we deploy the latest version of the web application to the new instance. All of this can be accomplished by a user data script (written in PowerShell) passed to the EC2 instances; the script is executed when a new instance is launched.
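The script described above is PowerShell; as an illustrative Python sketch, the two key commands it would run can be assembled like this. It assumes the standard `Tentacle.exe register-with` command and the AWS CLI's `aws ec2 create-tags`; the server URL, API key, machine name, environment, and role are all placeholders you would substitute.

```python
# Sketch of the user-data steps: register the Tentacle with the Octopus
# server, then tag the EC2 instance with its Octopus machine name so the
# termination handler can find it later. All values are placeholders.

def registration_commands(server, api_key, machine_name, instance_id,
                          environment="Production", role="web-server"):
    """Return the commands the user data script would run, in order."""
    register = (
        'Tentacle.exe register-with '
        f'--server "{server}" --apiKey "{api_key}" '
        f'--name "{machine_name}" '
        f'--environment "{environment}" --role "{role}" '
        '--comms-style TentaclePassive'
    )
    # Tag the instance so the termination Lambda can look up which
    # Octopus machine to deregister.
    tag = (f'aws ec2 create-tags --resources {instance_id} '
           f'--tags Key=OctopusMachineName,Value={machine_name}')
    return [register, tag]
```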

When an EC2 instance is terminated, we read the corresponding Octopus machine name from its tags and make a request to the Octopus server to deregister the Tentacle. This can be accomplished by a Lambda function triggered by a CloudWatch event.
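A hedged sketch of that Lambda follows. The event shape assumed is the standard "EC2 Instance State-change Notification"; the Octopus endpoints (`GET /api/machines/all`, `DELETE /api/machines/{id}`) follow the Octopus REST API conventions, but verify them against your server version. `get_tags` and `octopus_request` are injected here so the logic stays self-contained; in real use they would wrap boto3 and an HTTP client sending the `X-Octopus-ApiKey` header.

```python
# Sketch of a Lambda that deregisters the Tentacle for a terminated
# EC2 instance. Dependencies are injected as plain functions.

def machine_name_from_tags(tags):
    """Find the Octopus machine name among an instance's tags."""
    for tag in tags:
        if tag["Key"] == "OctopusMachineName":
            return tag["Value"]
    return None

def handler(event, get_tags, octopus_request):
    """Deregister the Octopus machine for a terminated instance.

    get_tags(instance_id) -> list of {"Key": .., "Value": ..} dicts;
    octopus_request(method, path) -> parsed JSON response body.
    Returns the deleted machine id, or None if nothing was done.
    """
    detail = event.get("detail", {})
    if detail.get("state") != "terminated":
        return None
    name = machine_name_from_tags(get_tags(detail["instance-id"]))
    if name is None:
        return None
    # Look up the machine id by name, then delete (deregister) it.
    for machine in octopus_request("GET", "/api/machines/all"):
        if machine["Name"] == name:
            octopus_request("DELETE", f"/api/machines/{machine['Id']}")
            return machine["Id"]
    return None
```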

4 Comments

Andrew Hodgson
on 27/09/2016 at 1:17 pm

Hi,

Really good article. I tried doing something similar at the beginning of the year, one thing I ran into was creating a certificate using the command line, as the userdata script was running as a user which didn’t have access to the certificate store.

If I created a scheduled task to run with a specific user at reboot, I got it working.

Hi Andrew,
I don’t use the user data script to generate certificates. I have a step in Octopus Deploy that generates the certificate (if it does not exist) and then passes its thumbprint to the next deployment step.

Good article. How can we scope config changes in Octopus for EC2 instances? Suppose I have 5 instances under one Auto Scaling group. How can I have a different value of one variable on each of the five instances?

However, in an Auto Scaling group, machines will normally be created and destroyed depending on demand. Perhaps you could create and set variables dynamically using the EC2 user data script rather than scoping them in Octopus?