Tag: docker-management

Why Smart Container Management is Key

For anyone working in IT, the excitement around containers has been hard to miss. According to RightScale, enterprise deployments of Docker more than doubled in 2016, with 29% of organizations using the software versus just 14% in 2015 [1]. Even more impressive, fully 67% of organizations surveyed are either using Docker or plan to adopt it. While many of these efforts are early stage, separate research shows that over two thirds of organizations that try Docker report that it meets or exceeds expectations [2], and the average Docker deployment quintuples in size in just nine months. Clearly, Docker is here to stay.

Docker containers make app development easier. But deploying them in production can be hard.

Software developers are typically focused on a single application, application stack, or workload that they need to run on a specific infrastructure. In production, however, a diverse set of applications runs on a variety of technologies (e.g. Java, LAMP, etc.), which need to be deployed on heterogeneous infrastructure running on-premises, in the cloud, or both. This gives rise to several challenges.

This post is the first in a series in which we’d like to share the story of how we implemented a container deployment workflow using Docker, Docker Compose, and Rancher. Instead of just giving you the polished retrospective, though, we want to walk you through the evolution of the pipeline from the beginning, highlighting the pain points and decisions that were made along the way. Thankfully, there are many great resources to help you set up a continuous integration and deployment workflow with Docker. This is not one of them! A simple deployment workflow is relatively easy to set up, but our own experience has been that building a deployment system is complicated mostly because the easy parts must be done alongside a legacy environment, with many dependencies, and while changing your dev team and ops organization to support the new processes. Hopefully, our experience of building our pipeline the hard way will help you with the hard parts of building yours.

In this first post, we’ll go back to the beginning and look at the initial workflow we developed using just Docker. In future posts, we’ll progress through the introduction of Docker Compose and eventually Rancher into our workflow.

To set the stage, the following events all took place at a Software-as-a-Service provider where we worked on a long-term services engagement. For the purpose of this post, we’ll call the company Acme Business Company, Inc., or ABC. This project started while ABC was in the early stages of migrating its mostly-Java microservices stack from on-premises bare-metal servers to Docker deployments running in Amazon Web Services (AWS). The goals of the project were not unique: lower lead times on features and better reliability of deployed services.

The plan to get there was to make software deployment look something like this:
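In shell terms, the target was roughly the familiar build-ship-run loop. The sketch below is illustrative only: the registry address, service name, and deploy host are hypothetical stand-ins, not ABC's actual setup.

```shell
#!/bin/sh
# Hypothetical build-ship-run loop. The registry, service name, and
# host below are illustrative stand-ins.
set -e

VERSION="$(git rev-parse --short HEAD)"   # tag images by commit
IMAGE="registry.abc.example/java-service:${VERSION}"

docker build -t "${IMAGE}" .              # build: package the app as an image
docker push "${IMAGE}"                    # ship: publish to the registry

# run: on the target host, replace the old container with the new one
ssh deploy@app-host.abc.example "
  docker pull ${IMAGE}
  docker rm -f java-service 2>/dev/null || true
  docker run -d --name java-service -p 8080:8080 ${IMAGE}
"
```

Even this naive loop captures the key idea the series builds on: every deployable artifact is an immutable, versioned image, and deployment is just swapping containers.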

Over the last few months, our team at Rancher Labs has been adding support for Kubernetes within Rancher. We’ve been implementing Kubernetes in a way that takes advantage of Rancher’s platform orchestration, simple UI, access control, networking, and storage capabilities to deliver simple-to-deploy Kubernetes clusters for managing applications. In our February meetup, we introduced this new support, discussed how these environments compare with our traditional Docker environments, and helped users understand when and how each can be used to deploy and manage container deployments.

This new functionality will be available in Rancher in March 2016, within the next two to three weeks. We’ve uploaded a recording of the meetup below, as well as posted the slides to SlideShare.

So far in this series of articles, we have looked at creating continuous integration pipelines using Jenkins and continuously deploying to integration environments. We have also looked at using Rancher Compose to run deployments, as well as Route53 integration for basic DNS management. Today we will cover production deployment strategies, and circle back to DNS management to show how we can run multi-region and/or multi-data-center deployments with automatic failover. We will also look at some rudimentary auto-scaling, so that we can automatically respond to request surges and scale back when the request rate drops again. If you’d like to read this entire series, we’ve made an eBook, “Continuous Integration and Deployment with Docker and Rancher,” available for download.
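As a rough illustration of what "rudimentary auto-scaling" can mean, here is a naive polling loop in shell. Everything in it is a hedged assumption for illustration: the service name `web`, the metrics endpoint, the thresholds, and the use of Compose's v1 `scale` command (the series itself uses Rancher Compose for deployments).

```shell
#!/bin/sh
# Naive auto-scaling loop: poll a request-rate metric and scale a
# Compose service up or down. The service name "web", the metrics
# endpoint, and the thresholds are all illustrative assumptions.
SCALE=2
while true; do
  # hypothetical internal endpoint returning requests/sec as an integer
  RATE=$(curl -s http://metrics.internal/requests_per_sec)
  if [ "$RATE" -gt 1000 ] && [ "$SCALE" -lt 10 ]; then
    SCALE=$((SCALE + 1))          # surge: add a container
  elif [ "$RATE" -lt 200 ] && [ "$SCALE" -gt 2 ]; then
    SCALE=$((SCALE - 1))          # quiet: remove a container
  fi
  docker-compose scale web="$SCALE"
  sleep 60                        # re-evaluate once a minute
done
```

A production version would add hysteresis and cooldown windows so the service doesn't flap between scales, which is the kind of refinement the article discusses.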

Recently, Rancher introduced the Rancher catalog, an awesome feature that lets Rancher users deploy common applications and complex services on their infrastructure with one click from catalog templates, while Rancher takes care of creating and orchestrating the Docker containers for you.

The Rancher catalog offers a wide variety of applications out of the box, including GlusterFS and Elasticsearch, and it also supports private catalogs. Today I am going to introduce a new catalog template I developed for deploying a MongoDB replica set, and show you how I built it.
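To see what such a template automates, here is a sketch of the manual steps for bringing up a three-member MongoDB replica set with plain Docker. The container names, network name, and image tag are illustrative assumptions, not the template's actual contents.

```shell
#!/bin/sh
# Illustrative sketch of the manual steps a MongoDB replica-set
# template automates. Names, network, and image tag are hypothetical.
set -e

docker network create mongo-net

# Start three mongod containers as members of replica set "rs0"
for i in 1 2 3; do
  docker run -d --name "mongo${i}" --network mongo-net \
    mongo:3.2 mongod --replSet rs0
done

# Initiate the replica set from the first member
docker exec mongo1 mongo --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongo1:27017" },
      { _id: 1, host: "mongo2:27017" },
      { _id: 2, host: "mongo3:27017" }
    ]
  })'
```

A catalog template wraps this kind of bootstrapping, plus member discovery and recovery, so users get a working replica set with one click instead of a sequence of manual commands.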