I spent quite some time digging around the web looking for a straightforward way to create a CI/CD pipeline for containerized applications using AWS services. Ideally, I wanted to use the AWS CLI tool as much as possible; the CLI is much easier to manipulate and change on the fly ( IMHO ).

There are several good articles and examples out there, some using CloudFormation ( which I’ll never do … ugly ), and others with partial examples, but nothing I found was complete.

So here’s what I’ve come up with: a bash shell script that can be run from the command line. It creates a complete end-to-end CI/CD pipeline for a containerized application using Docker, utilizing these AWS services:

Most of the above commands return JSON output, which we’ll need to parse to collect specific attributes ( mostly IDs ) before continuing to the next set of commands. To parse the output, we’ll use the “jq” utility. For example, from the first CLI command:

$> createccrepo=$( aws codecommit create-repository --repository-name $PROJECT --region $REGION 2>/dev/null )

We need to extract the SSH git repo location in order to initialize and create the local git repo as our code starting point:

$> repossh=$( echo $createccrepo | jq -r '.repositoryMetadata.cloneUrlSsh' 2>/dev/null )

And later in the process:

$> git clone $repossh
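The same jq pattern applies to every command in the script. Here’s a self-contained illustration using a canned JSON response in place of the live AWS call ( the sample JSON mimics the shape of what create-repository returns ):

```shell
# Simulate the JSON that `aws codecommit create-repository` returns, then
# pull out the SSH clone URL with jq, exactly as the script does.
createccrepo='{"repositoryMetadata":{"repositoryId":"abc-123","cloneUrlSsh":"ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/myproj"}}'
repossh=$( echo "$createccrepo" | jq -r '.repositoryMetadata.cloneUrlSsh' )
echo "$repossh"
```

The `-r` flag tells jq to emit the raw string rather than a quoted JSON value, which is what you want when feeding the result to another command like `git clone`.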

You’ll also notice that I use variables throughout the code. Several variables are required to run the script ( i.e. $PROJECT and $REGION ); those will be identified later.

Additional structure will be required to run the commands as well, along with several embedded templates to support the aws cli commands … a complete script will be provided at the end of this page, which includes everything needed to run the process.

Before we get into the gruesome details, let’s take a look at what the pipeline looks like from a diagram:

As you can see above, the CI/CD pipeline isn’t all that complicated ( sort of ). Only a few main components make up the entire process, along with a few small configuration files. Here’s the basic concept and the general steps to create and launch a web site project:

The developer or engineer who creates the product ( in this case we’ll assume a web site of some flavor – Django, Angular, Node.js “Express”, Rails, etc. ) will containerize it using whatever method suits the task ( e.g. docker, docker-compose, maven, etc. ).

Once the product can be built in a repeatable and consistent fashion, the CI/CD pipeline can be built.

The CI/CD build script can be launched with the “project name” as the first input argument.

When the pipeline completes, the git repo can be initialized and merged into the developer’s codebase.

When the developer is ready to push changes to the development or staging environment, a simple “git push” is all that is needed to trigger the pipeline.

Once the pipeline sees any change to the codebase in the specific branch, the code is sent up the pipeline, built, and deployed into the ECS cluster ( Fargate ).

When the deployment completes, the website’s URL will automatically reflect the changes.

More on that later. So let’s tune our script to be a bit more complete, filling in some of the details to make all of this work.

We need to establish our environment along with a few static variables that define our AWS account, VPC, and a few network-specific settings. So we’ll add this set of variables at the beginning of our script:
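The original variable block isn’t reproduced here, but it might look something like this sketch. Only $PROJECT and $REGION are named in the article; the remaining variable names and values are placeholders I’ve assumed for illustration:

```shell
# Hypothetical environment block; only PROJECT and REGION appear in the
# article, the rest are assumed placeholders for account/VPC/network settings.
PROJECT="$1"                        # project name, first script argument
REGION="us-east-1"                  # AWS region to build in
ACCOUNT="123456789012"              # your AWS account ID
VPC="vpc-0123456789abcdef0"         # VPC the ECS ( Fargate ) cluster runs in
SUBNET1="subnet-0123456789abcdef0"  # first subnet for the Fargate tasks
SUBNET2="subnet-0fedcba9876543210"  # second subnet
SECGRP="sg-0123456789abcdef0"       # security group for the service
BRANCH="master"                     # branch whose pushes trigger the pipeline
```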

Most of the above is self-explanatory; just edit the variables ( highlighted in red ) to match your specific environment.

For the rest of the script, we’ll need to add some templates and supporting files to feed to the various cli commands. I won’t go into any real detail on these, as they fall outside the scope of this article. Instead, we’ll get right to the punchline: let’s show what the entire script looks like.

So go ahead and extract it, change the variables to match your environment, chmod it to 700 ( or whatever you think is appropriate to run in your env ), and let me know how it goes. When you first run the program, it is designed to pause and display your settings like this:

So fill in the variables to match your local environment and then launch it to create your complete end-to-end CI/CD pipeline. Please feel free to comment and let me know how it goes. I will be making adjustments to further automate the process, along with a cleanup script to remove the project when you’ve finished testing.