Your team has completed its development, reviewed and tested the code on their dev systems, and checked it into the code repository. The build system has compiled the application, run the unit tests, and done whatever else it needs to validate the build. But the application fails after it is deployed to some downstream server. You dig and dig, only to find that there is a configuration difference between the development systems and the downstream systems.

Or how about this one?

You have completed the development of your application, and suddenly your customer's management decides to change the hosting provider.

Root Cause

For traditional, non-cloud applications running on a server, the delivery artifact is typically a software package tailored for the target OS.

The typical deployment strategy for packages involves copying the package archives (containing binary executables, JARs, scripts, etc.) to the host VM, making any necessary configuration changes to the host VM, and updating the database if needed. All of this work is done using tools available on the target OS to install the package and manage its dependencies on other packages. The target OS package management tool installs the application, gives file ownership to the proper users, and ensures that the application starts on OS boot and shuts down on OS stop.

All of this means that when your customer decides they would like to deploy your application in a new IT data center, you will have to go through a development cycle of several months just to update startup scripts and installation procedures. It also introduces support risk, since there is probably no longer a one-to-one mapping between your dev/QA systems and the production system. This is even harder if the customer cannot provide a replicated environment for testing, or instructions to configure a VM with their customized installation. So the production installation may fail, even though it functions correctly in QA.

To get this work done using traditional VMs, your application development team would need to deliver an RPM containing their software and startup scripts. They would also need to communicate all of the information regarding network dependencies, such as firewall rules. This would allow a VM to be created that contains all dependent software. Each application would be deployed into its own VM. These VMs would be large, since VM instances do not share any common components, and so may be too heavy to deploy to a developer's laptop.

What is the solution?

What is needed in this situation is a deployment unit that is smaller than a VM but still provides application isolation. Something that reduces the application's dependencies on OS packaging.

Development & QA will want to work directly with this deployment unit when doing their testing. This ensures that the same test results are delivered at each step on the release path.

You will also want to automate the construction of this unit of deployment in your continuous integration tool, so that your build process creates a single unit that encapsulates all of the application's dependencies, operating environment needs (such as port mappings), and startup requirements.

Docker is an open-source project (Apache 2.0 license) that provides a software container in which Linux applications run. The container provides all of the dependencies the application needs, but avoids the weight of a full VM by sharing the kernel with other containers. It also provides resource isolation (CPU, memory, I/O, network) and shares resources between running apps where possible (OS, bins/libs). Containers have much faster start times and far smaller disk storage requirements, which can translate to higher densities per node.

The construction of Docker images can be integrated into your current build system, allowing each application to be built and delivered as a Docker container. This means that developers and testers run the exact same image that will be deployed to production. Testers should never again hear "It works on my system" from developers.
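As a sketch of what that build artifact might look like, here is a minimal Dockerfile for a hypothetical Java application (the base image, JAR path, and port are illustrative placeholders, not anything from a real project):

```dockerfile
# Base image providing the Java runtime (placeholder choice)
FROM java:8-jre

# Bundle the application binary into the image
COPY target/myapp.jar /opt/myapp/myapp.jar

# Declare the port the application listens on
EXPOSE 8080

# Startup requirements live in the image, not in OS init scripts
CMD ["java", "-jar", "/opt/myapp/myapp.jar"]
```

Your CI job would then run something like docker build -t myapp:$BUILD_NUMBER . and push the resulting image to a registry, so the same image flows from dev through QA to production.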

Here is a list of some of Docker's advantages.

Isolation

Filesystem: each container has a completely separate root filesystem; shared files can be “mounted” in from the Host OS

Resources: CPU and Memory can be allocated differently to each container

Network: each container has its own network namespace, with its own virtual interface and IP address

Common deployment unit

No worries about supporting different package managers or init mechanisms

Images can be stacked / chained together

Same container runs on developer’s laptop, in the CI environment, and in the production environment

There are many opportunities where Docker images can be used. I would love to hear how you are using them.

boot2docker – Remote Docker daemon

boot2docker is a lightweight Linux distribution based on Tiny Core Linux made specifically to run Docker containers. It runs completely from RAM, weighs ~27MB and boots in ~5s.

boot2docker is required if you want to do any work with Docker images on a Mac. This includes building images and running containers.

Installing Boot2Docker on Mac using Homebrew

$ brew install boot2docker

If you are not a user of Homebrew for package management, I highly recommend it. You can get more information on it, and how to install it, at Homebrew.

Start boot2docker

$ boot2docker init
$ boot2docker start
$ $(boot2docker shellinit)

“boot2docker init” creates a new VM. This only needs to be run once unless you delete your VM.

The last line “$(boot2docker shellinit)” sets the DOCKER_HOST environment variable for this shell.

SSH into the boot2docker VM

$ boot2docker ssh

Note that this file lives inside the boot2docker VM, not on the Mac itself: the Docker daemon's init script is located at /etc/init.d/docker.

Managing your Boot2Docker VM

There is a limited set of commands that can be used to manage your boot2docker VM, but by using the VirtualBox CLI (VBoxManage), you can fine-tune its configuration. If you prefer a graphical interface, you can use VirtualBox itself: once boot2docker is up, start VirtualBox and you will see boot2docker-vm listed there. The VirtualBox download also includes the documentation for the CLI.

Handling the insecure registry error

Error: Invalid registry endpoint : Get : EOF. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add --insecure-registry 168.84.250.205:5000 to the daemon’s arguments. In the case of HTTPS, if you have access to the registry’s CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/168.84.250.205:5000/ca.crt

Insecure connections to registries are not allowed by default starting with version 1.3.1 of Docker. You may see the error above when attempting to pull from an insecure private registry. To fix this issue, you need to pass the --insecure-registry flag to the Docker daemon running inside the boot2docker VM.
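One way to do this, as a sketch (replace 168.84.250.205:5000 with your own registry address, as reported in the error), is to add the flag to the daemon's EXTRA_ARGS in the boot2docker profile and restart the daemon:

```shell
# SSH into the boot2docker VM
$ boot2docker ssh

# Append the flag to the daemon's arguments
# (the profile file is created if it does not already exist)
$ sudo sh -c 'echo "EXTRA_ARGS=\"--insecure-registry 168.84.250.205:5000\"" >> /var/lib/boot2docker/profile'

# Restart the Docker daemon so the flag takes effect
$ sudo /etc/init.d/docker restart
```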

Sync boot2docker

The boot2docker host suffers from time drift while your OS is asleep. This issue manifests itself on macOS; I am not sure whether it also affects Windows. I ran into it while compiling code in an image as it was being built: the build date of the application lagged further and further behind until I restarted boot2docker, which re-synced the clock. What I needed was the ability to sync boot2docker with a time server every time a new image was built.

To resync the boot2docker VM with a time server:

$ /usr/local/bin/boot2docker ssh sudo ntpclient -s -h pool.ntp.org

Exposing your containers to the network

If you want to share container ports with other computers on your LAN, you will need to set up NAT-adapter-based port forwarding.

For example, on a running instance of boot2docker that is hosting a Tomcat server on port 8080, you can forward all incoming requests on port 8080 from the host OS to boot2docker.
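A sketch of the VirtualBox command for this (the rule name tcp8080 is arbitrary; boot2docker-vm is the default VM name):

```shell
# Forward host port 8080 to port 8080 on the running boot2docker VM.
# Rule format: "name,protocol,host ip,host port,guest ip,guest port"
$ VBoxManage controlvm boot2docker-vm natpf1 "tcp8080,tcp,,8080,,8080"
```

Note that the container's port must also be published to the boot2docker VM, e.g. with docker run -p 8080:8080 when starting Tomcat.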

Docker allows you to package an application with all of its dependencies into a standardized unit for software development.

Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.

Docker Images vs Docker Containers

Docker images are the basis from which Docker containers are created. When you start up a Docker image, a Docker container is created. I liken it to classes and objects: the image (class) represents all of the capabilities of the container (object), but an image by itself cannot do anything until it is instantiated. Once a container is created, it can be started and stopped freely, and it saves its state. You can create multiple containers from a particular image, as long as you give them different names.
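As a sketch (the image name myapp is hypothetical), the class/object analogy maps onto commands like these:

```shell
# Two containers (objects) created from the same image (class);
# each container must be given a unique name
$ docker run -d --name myapp-1 myapp
$ docker run -d --name myapp-2 myapp

# A container can be stopped and started freely and keeps its state
$ docker stop myapp-1
$ docker start myapp-1
```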

Working with Docker Images

Installing Docker using Homebrew

$ brew install docker

If you are not a user of Homebrew for package management, I highly recommend it. You can get more information on it, and how to install it, at Homebrew.

Open bash prompt in a container

This command creates a container from the specified image ($image_id), opens a bash shell in it, and returns the container id (CONTAINERID).
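A sketch of such a command, assuming $image_id holds the id of an image in your local repository:

```shell
# Start a container running an interactive bash shell in the background;
# "docker run -d" prints the id of the new container
$ CONTAINERID=$(docker run -d -i -t $image_id /bin/bash)

# Attach your terminal to the shell inside the container
$ docker attach $CONTAINERID
```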

List all images in your local repository

$ docker images

Remove all untagged images (registry cleanup)

$ docker rmi $(docker images | grep "^<none>" | awk '{print $3}')
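Note that the awk program should be in single quotes. With double quotes, the shell expands $3 (normally empty) before awk ever runs; a quick demonstration of the difference:

```shell
# Double quotes: the shell replaces $3 with nothing, so awk
# receives the program "{print }" and prints the whole line
$ echo "a b c" | awk "{print $3}"
a b c

# Single quotes: awk itself interprets $3 as the third field
$ echo "a b c" | awk '{print $3}'
c
```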

Working with containers

Open bash shell in a container

To launch a container, simply use docker run followed by the name of the image you would like to run and the command to run within the container. If the image doesn't exist on your local machine, Docker will attempt to fetch it from the public image registry.
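For example, using the public ubuntu image (any image you have access to works the same way):

```shell
# Pulls the ubuntu image from the public registry if it is not
# already local, then runs echo inside a new container
$ docker run ubuntu /bin/echo "Hello from a container"
Hello from a container
```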

Setting the $JAVA_HOME environment variable on the Mac OS can be a little confusing. I hope this clears things up.

Open either ~/.bash_profile or ~/.profile in your favorite editor. I am using TextMate and I prefer editing ~/.profile.

Add “export JAVA_HOME=$(/usr/libexec/java_home)” to the file and save it.

You can reload the current environment without logging out by typing “. ~/.profile”

What is going on here? The java_home man page lays it out pretty clearly.

“The java_home command returns a path suitable for setting the JAVA_HOME environment variable. It determines this path from the user’s enabled and preferred JVMs in the Java Preferences application. Additional constraints may be provided to filter the list of JVMs available. By default, if no constraints match the available list of JVMs, the default order is used. The path is printed to standard output.”

There are several options called out in the man page, but the most important to you are probably "-V" and "-v".

“/usr/libexec/java_home -V” – Prints the matching list of JVMs and architectures to stderr.

“/usr/libexec/java_home -v” – Filters the returned JVMs by the major platform version in “JVMVersion” form. Example versions: “1.5+”, or “1.6*”.

So, if you need to change JAVA_HOME to an earlier version of Java (maybe you have a program that requires Java 5), add “export JAVA_HOME=$(/usr/libexec/java_home -v 1.5)” to your ~/.profile file.

This assumes that version 1.5 was returned by “/usr/libexec/java_home -V”

In my last two posts, I discussed the importance of engaging in a postmortem at the end of your projects and promised to provide a template that can be followed when gathering feedback prior to the meeting and consolidating it during the meeting.

Templates like this have been created and posted all over the web, so this is really just a collection of what I think are the "best of" details that should be gathered together to make a postmortem successful. I have customized it for my uses. Feel free to grab it and customize it for yours.

Project

Description

Project Name:

Client:

Project Manager:

Solutions Architect:

Start Date:

Completion Date:

Project Overview [Describe the project in detail.

Discuss the project charter

What was the project success criterion?

etc.]

Performance

Key Accomplishments [List and describe key project accomplishments in the space provided below. Explain elements that worked well and why. Consider listing them in order of importance. Be specific.]

What were the effects of key problem areas (e.g., on budget, schedule, etc.)?

Technical challenges

Risk Management [List project risks that have been mitigated and those that are still outstanding and need to be managed.]

Project risks that have been mitigated:

Outstanding project risks that need to be managed:

Overall Project Assessment [Score/rank the overall project assessment according to the measures provided. A 10 indicates excellent, whereas a 1 indicates very poor.]

Criteria

Score

Performance against project goals/objectives

1 2 3 4 5 6 7 8 9 10

Performance against planned schedule

1 2 3 4 5 6 7 8 9 10

Performance against quality goals

1 2 3 4 5 6 7 8 9 10

Performance against planned budget

1 2 3 4 5 6 7 8 9 10

Adherence to scope

1 2 3 4 5 6 7 8 9 10

Project planning

1 2 3 4 5 6 7 8 9 10

Resource management

1 2 3 4 5 6 7 8 9 10

Project management

1 2 3 4 5 6 7 8 9 10

Development

1 2 3 4 5 6 7 8 9 10

Communication

1 2 3 4 5 6 7 8 9 10

Team cooperation

1 2 3 4 5 6 7 8 9 10

Project deliverable(s)

1 2 3 4 5 6 7 8 9 10

Additional Comments:

Other general comments about the project, project progress, etc.

Key Lessons Learned

Lessons Learned [Summarize and describe the key lessons and takeaways from the project. Be sure to include new processes or best practices that may have been developed as a result of this project and to discuss areas that could have been improved, as well as how (i.e. describe the problem and suggested solution for improvement).]

Post Project Tasks/Future Considerations [List and describe, in detail, all future considerations and work that needs to be done with respect to the project.]

Ongoing development and maintenance considerations

What actions have yet to be completed and who is responsible for them?

Is there anything still outstanding or that will take time to realize? (i.e. in some instances the full project deliverables will not be realized immediately)

In my last post, I wrote about post-mortems and how they require courage to perform well. In this post, I will focus on the need for optimism.

The most important aspect of the post-mortem is the final result. If nothing changes as a result of the meeting, it has been a waste of time. In fact, if the project didn't go well, I would say it was a painful waste of time. Why spend the time rehashing the mistakes if you are not going to put any new processes in place that would prevent them from happening next time?

With this in mind, you should go into this meeting with a great sense of optimism. Optimism for the future. There is no reason to have a post-mortem if you don’t think things can or will get better.

To be optimistic we have to make sure that we cover all of the right bases. This means going over the successes as well as the failures. Everyone should come out of this meeting feeling good about themselves and having a plan for their areas of improvement. Covering the bases also means making sure that this is not an opportunity to punish the team members. I am talking to management here. No one is going to open up in the meeting and give their honest opinions if they think they will be punished later. Finally, you need to create a plan of action. This is the frosting on the cake. This is what helps everyone to leave the meeting feeling good and looking forward to the next project.

Can this be done? In my next post, I will lay out a template for a postmortem meeting that you can use to achieve courageous postmortem meetings attended by optimistic (maybe even excited) individuals.