6) Orchestrate a target group for the load balancer, complete with the targets, path rule and listener

First things first: set these variables in your shell session so they stay defined throughout the examples. It should go without saying, but sometimes it needs to be said - replace the values below with values relevant to your actual account...

# Your AWS account number
ACCOUNT_NUM="123456789012"
# Region you are doing this in
REGION="us-west-2"
# The ECR base repository, containing your account num and region
ECR_REPOSITORY="${ACCOUNT_NUM}.dkr.ecr.${REGION}.amazonaws.com"
# The profile used to log in (see your ~/.aws/credentials file)
ECS_CRED_PROFILE="ecs-profile"
# VPC ID (whichever VPC your ECS-optimized EC2 instance is attached to)
ECS_INSTANCE_VPC_ID="vpc-123xyz0987"
# Subnets on the VPC (choose 2 or more)
ECS_INSTANCE_SUBNET_ONE="subnet-123a987x"
ECS_INSTANCE_SUBNET_TWO="subnet-987x123a"
# The cluster's name
ECS_CLUSTER_NAME="my-ecs-cluster"
# Your app load balancer name
ELBV2_LOAD_BALANCER_NAME="my-ecs-app-load-balancer"
# You will be able to set the ARN after the load balancer is created
ELBV2_LOAD_BALANCER_ARN="arn:aws:elasticloadbalancing:${REGION}:${ACCOUNT_NUM}:loadbalancer/app/load-balancer-ecs/123abc456def789"

Create the cluster and load balancer first - this will be a one-time gig
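These two commands use the variables defined above. A sketch of the one-time setup - note the security group ID is a placeholder I'm inventing here; use one that allows the traffic your app expects:

```shell
# One-time setup: create the ECS cluster...
aws ecs create-cluster \
    --cluster-name ${ECS_CLUSTER_NAME} \
    --profile ${ECS_CRED_PROFILE} \
    --region ${REGION}

# ...and the application load balancer, spanning the two subnets.
# sg-0123abc456def is a placeholder security group ID.
aws elbv2 create-load-balancer \
    --name ${ELBV2_LOAD_BALANCER_NAME} \
    --subnets ${ECS_INSTANCE_SUBNET_ONE} ${ECS_INSTANCE_SUBNET_TWO} \
    --security-groups sg-0123abc456def \
    --profile ${ECS_CRED_PROFILE} \
    --region ${REGION}
```

The create-load-balancer response includes a "LoadBalancerArn" value - that's what goes into the ELBV2_LOAD_BALANCER_ARN variable from earlier.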

Login to your ECR repository. This login should carry you through the steps if done within a reasonable time, but if it expires, you'll need to run the command again. This is just one of many creative ways to do this, as discussed here: https://github.com/aws/aws-cli/issues/2875
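Assuming AWS CLI v2, one way to do that login (the linked issue covers the older `aws ecr get-login` style and several alternatives):

```shell
# Fetch a temporary password for ECR and pipe it straight into docker login.
aws ecr get-login-password --profile ${ECS_CRED_PROFILE} --region ${REGION} \
  | docker login --username AWS --password-stdin ${ECR_REPOSITORY}
```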

You should include the base repository URL in the image name. I believe Docker otherwise interprets this as a push to the official Docker library. Basically, you would see something in the next step that looks like

"The push refers to repository [docker.io/library/my-projects/my-sample-image]"

Besides the ECR base repository being in the image name, this is based on the example where I namespaced the project repository. Whatever you ended up calling your image/repository, this should match.

Did you notice I didn't use the full image ID hash? I used a shortened hash - the hash may be the fewest characters required to match, so long as the shortened hash resolves to exactly one image ID. If I had another image ID of, say, 1cbxyz123, I would have had to identify the above hash as "1ca"

Finally, I could have also excluded the tag, "latest." This tag is applied by default when no tag is specified. You can use other tags at will, such as v0.1 or whatever has significance for your project.
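Putting that together, the tag command looks something like this - "1ca" is a hypothetical shortened image ID (substitute the ID shown by `docker images` for your build), and the repository path should match what you created in ECR:

```shell
# Tag the local image with the full ECR repository path so docker knows
# where to push it. "1ca" stands in for your image's (shortened) ID.
docker tag 1ca ${ECR_REPOSITORY}/my-projects/my-sample-image:latest
```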

Now that we created the repo and tagged the image locally, let's push it on up.

(Depending on the size of your image and your internet connection, this can take a while)
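Assuming the image/repository naming from the tag step above, the push is a one-liner:

```shell
# Push the tagged image up to ECR.
docker push ${ECR_REPOSITORY}/my-projects/my-sample-image:latest
```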

This next step is a little messy, but bear with me here. We're going to create a task definition, which essentially orchestrates the containers' environments. This is a very similar concept to Docker Compose, the key differences being that task definitions are JSON formatted rather than YAML documents, and that they are built for each environment rather than templated with environment variables. I might explain this somewhere else another time, but for now, let's register the task definition.

First, create a file called my-sample-task-definition.json. Here is a basic example of what goes into that file:

In this example, the file is located at /usr/share/ecs-task-definitions/my-sample-task-definition.json
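A minimal sketch of such a file - the family, container name, memory limit, and port values here are placeholders I'm assuming for this walkthrough; hostPort 0 asks ECS for a dynamically mapped host port, which works well with target groups:

```json
{
    "family": "my-sample-task-definition",
    "containerDefinitions": [
        {
            "name": "my-sample-container",
            "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/my-projects/my-sample-image:latest",
            "memory": 128,
            "essential": true,
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 0
                }
            ]
        }
    ]
}
```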

Once your file is saved, register it. If you don't mind the ugliness, you can also put that json string into the argument directly (omitting the tabs and newlines). I imagine this would be a good way to automate this and use variables in a script. But for this tutorial, we'll just refer to the file:
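Registering from the file path used above:

```shell
# Register the task definition straight from the JSON file.
aws ecs register-task-definition \
    --cli-input-json file:///usr/share/ecs-task-definitions/my-sample-task-definition.json \
    --profile ${ECS_CRED_PROFILE} \
    --region ${REGION}
```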

Now it's time to create the target group for the app load balancer. The naming scheme of some of these entities can be a little confusing, because the "load balancer" is more like a "proxy." We're going to set up a listener and a path-based rule that affiliate with the target group. Per the flow of this example/tutorial, the target group will have a single target within it, but a target group can contain more than one target, and in that regard the target group is effectively the "load balancer," as it directs traffic to a healthy target in the mix. If one container is down (fails a health check), the application load balancer is not aware of this - the target group is. The load balancer will continue to direct traffic, based on the path rule, to the target group, and the target group will determine the target based on the health of each of its targets.

Here are some helpful diagrams of this (I stole the images from here):

So, it's important to understand the details of the order of these next commands. Creating a target group will allow us to create a path listener and rule. The listener and rule require the ARN of the new target group, so we need to parse the JSON response from the CLI command. My approach to this was:

Before the first pipe, the response was a multi-line JSON output indicating the success of the newly created target group. The grep TargetGroupArn isolates the line of that JSON where the new ARN was output in the response. In the next segment, the awk command gave us the second column, where the JSON value was. Finally, this bit was trimmed of quotes and commas in the last piped segment. The entire command was contained in a variable so the pure ARN can be used in the next sequence of commands.
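Put together, that looks like the following. The target group name, protocol, and port are placeholders, and ARN_FROM_CREATING_TARGET_GROUP is just a variable name I'm choosing here:

```shell
# Create the target group and capture its ARN:
#   grep isolates the "TargetGroupArn" line of the JSON response,
#   awk takes the second column (the value),
#   tr strips the quotes and trailing comma.
ARN_FROM_CREATING_TARGET_GROUP=$(aws elbv2 create-target-group \
    --name my-sample-target-group \
    --protocol HTTP \
    --port 80 \
    --vpc-id ${ECS_INSTANCE_VPC_ID} \
    --profile ${ECS_CRED_PROFILE} \
    --region ${REGION} \
  | grep TargetGroupArn \
  | awk '{print $2}' \
  | tr -d '",')
echo ${ARN_FROM_CREATING_TARGET_GROUP}
```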

Perhaps there is a better way to extract that - PLEASE hit me up (reply to this blog) and provide a cleaner solution if you come across one =)

Essentially the same principle is applied when creating the load balancer's listener. We want to wrap that in a variable so we can pass the listener's ARN into the command that creates the "rule." Notice that in this command, we're using the last ARN-wrapping variable for the listener's default actions, then wrapping all of that into yet another variable called ARN_FROM_CREATING_LISTENER.
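A sketch of those two commands - here ARN_FROM_CREATING_TARGET_GROUP stands in for whatever variable you stored the target group's ARN in, and the rule's priority and path pattern are placeholders for your app's route:

```shell
# Create the listener, capturing its ARN the same way as before.
# Its default action forwards to the target group created above.
ARN_FROM_CREATING_LISTENER=$(aws elbv2 create-listener \
    --load-balancer-arn ${ELBV2_LOAD_BALANCER_ARN} \
    --protocol HTTP \
    --port 80 \
    --default-actions Type=forward,TargetGroupArn=${ARN_FROM_CREATING_TARGET_GROUP} \
    --profile ${ECS_CRED_PROFILE} \
    --region ${REGION} \
  | grep ListenerArn \
  | awk '{print $2}' \
  | tr -d '",')

# Then create the path-based rule on that listener.
aws elbv2 create-rule \
    --listener-arn ${ARN_FROM_CREATING_LISTENER} \
    --priority 10 \
    --conditions Field=path-pattern,Values='/my-sample-app/*' \
    --actions Type=forward,TargetGroupArn=${ARN_FROM_CREATING_TARGET_GROUP} \
    --profile ${ECS_CRED_PROFILE} \
    --region ${REGION}
```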