IDG Contributor Network: What does container portability really mean?

Containers offer the promise of portability and agility: the ability to move your applications from a developer’s laptop to your internal datacenter, and out to different cloud providers, with little trouble. They offer the ability to spin up new, custom versions of your software to meet contractual deadlines signed at the last minute, or even to provide your customers with self-service. They start faster and are easier to move around than virtual machines. Right?

That’s the goal, but portability and compatibility are not the same thing. Portability is a business problem; compatibility is a technical problem. Portability can only be achieved by planning for compatibility across environments. Adopting containers alone provides no guarantee of application compatibility. Why would it? Containers are really just a fancy way of packaging applications and all their operating system dependencies.

Scott McCarty

A standard application definition is necessary to enable build and deployment automation across environments.

What does this all mean? It means that to really achieve portability, and hence agility in your business, you need to plan. Here is a quick set of recommendations to help ensure success:

1. Standard operating environment

This needs to include container hosts, container images, the container engine, and container orchestration. All of these moving parts need to be aligned, standardized, versioned, and tested together. Upgrades need to be planned as a unit because there are a lot of interdependencies. Infrastructure parity needs to be guaranteed in every environment where containers are built or run, including developers’ laptops, testing servers, and virtual machines. Use the same tested and certified components everywhere.
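One way to make a standard operating environment concrete is to pin every component version in a manifest kept in source control, so each environment can be checked against the same tested combination. The sketch below is hypothetical: the component names, registry, and version numbers are all placeholders, not a recommendation of specific versions.

```yaml
# Hypothetical standard-operating-environment manifest, versioned and
# tested as a unit. Every name and version here is a placeholder.
container_host:
  os: example-linux
  version: "9.3"
container_engine:
  name: cri-o
  version: "1.28.4"
orchestration:
  name: kubernetes
  version: "1.28.9"
base_images:
  # Pinning by digest ensures every environment builds from the
  # byte-identical base image, not just the same tag.
  runtime: registry.example.com/base/runtime@sha256:<digest>
```

Upgrading any one entry then becomes a deliberate change to this file, tested against the others, rather than drift in a single environment.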

2. Standard application definition

There are several technology choices when it comes to application definitions: Docker Compose, Kubernetes objects, OpenShift Templates, or Helm charts. Each has strengths and weaknesses. Do a bake-off and choose one application definition. Don’t translate between different ones; this will only lead to problems when features in one aren’t supported, or aren’t supported the same way, in another.

For example, developers shouldn’t build with Docker Compose, then rebuild with Kubernetes in production. Although this is technically possible, it leads to two sets of experience, two sets of investment, two sets of bugs, two sets of learning, two sets of documentation, two sets of workarounds for things that don’t work right, and so on. It is best to choose one and then if needed, reevaluate later if you find there is something better—this technology is changing rapidly.
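If the bake-off lands on Kubernetes objects, for instance, the same definition can be applied everywhere from laptop to production. The fragment below is a minimal sketch, assuming a hypothetical `web` application; the image name and port are placeholders.

```yaml
# Minimal Kubernetes Deployment used as the single application
# definition in every environment. Names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.4.2
        ports:
        - containerPort: 8080
```

Because this one artifact is what developers run locally (for example, in a small local cluster) and what production deploys, there is one set of bugs, one set of documentation, and one set of workarounds.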

3. Data storage, configuration, and secrets

These things should be determined and provided by the environment, not embedded in the container images, nor in the application definitions.

For example, you do not want credit card information in a data store on a developer’s laptop; that belongs only in production. Likewise, you don’t want production database passwords embedded in container images that developers use on their laptops.
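In Kubernetes terms, one way to keep credentials out of the image and the application definition is to reference a Secret that each environment creates for itself. This is a hedged sketch with placeholder names; the Secret `db-credentials` is assumed to be defined separately per environment and never committed to source control.

```yaml
# Hypothetical container spec fragment: the database password is
# injected by the environment at runtime, so the image and the
# application definition stay identical in dev, test, and production.
containers:
- name: web
  image: registry.example.com/web:1.4.2
  env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials   # created per environment, not shipped
        key: password
```

A developer’s laptop holds a throwaway password in its copy of the Secret, while production holds the real one; the definition that ships between them never changes.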

On the other hand, if you move between two cloud providers, you will need to move data and keep it synchronized. You can do this at three different levels: blocks, files, or transactions. This doesn’t change with containers, and most container platforms don’t handle it for you. Block storage replication is typically done with underlying technologies such as DRBD, SRDF, or Ceph. File replication is typically handled with Rsync, Gluster geo-replication, or, in low-latency environments, GFS2 or GPFS. At the transaction layer, replication has to be part of the data store or database and is completely different for each technology: with MySQL it’s transaction replication, with MongoDB it’s geographically redundant replica sets, with Oracle Database there are multiple options, and with JBoss Data Grid it’s cross-datacenter replication.
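At the file level, the simplest of these options can be as plain as a periodic rsync between providers. The command below is only an illustration: the user, hostname, and paths are placeholders, and block- or transaction-level replication would instead be configured in the storage layer or the database itself.

```shell
# Illustrative file-level replication sketch (placeholder host/paths):
# mirror the application's data directory to a replica in cloud B.
rsync -az --delete \
  /var/lib/app/data/ \
  sync@replica.cloud-b.example.com:/var/lib/app/data/
```

Note that file-level copies like this are asynchronous and give no transactional consistency; for a database, use the database’s own replication instead.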

Guaranteeing application portability requires extensive planning. Containers offer the promise of portability, but only if compatibility is thought about at the lower levels (container image, host, engine, orchestration), at the higher levels (application definition), and in the management (synchronization, role-based access, and so on) of data, configuration, and secrets.


The application definition is shipped between environments, but configuration and data are determined by the environment.

Start with an application that is straightforward. Standardize on a set of container hosts, images, an engine, an orchestrator, and an application definition. Figure out how to manage data, configuration, and secrets. When you succeed, expand to another workload. Not every workload will be easy, but when you nail these things, you will achieve agility and portability, lower the barrier to delivering applications faster, and make it easier to move them between cloud providers.

This article is published as part of the IDG Contributor Network.