Tag Archives: microservices

Interesting discussions happen when you hang out with straight-talking Paul Czarkowski. There’s a long chain of circumstance that led us from an Interop panel together in Barcelona (video) to bemoaning Ansible and Docker integration early one Sunday morning outside a gate at IAD.

What started as a rant about crazy ways people find of injecting configuration into containers (we seemed to think file mounting configs was “least horrific”) turned into a discussion about how to retro-fit application registry features (like Consul or etcd) into legacy applications.

Ansible Inventory is basically a static registry service.

While we both acknowledge that Ansible inventory is distinctly not a registry service, the idea is a useful way to explain the interaction between registry and configuration. The most basic goal of a registry (there are others!) is to let system components find and integrate with other system components. In that sense, the inventory allows operators to pre-wire this information in advance in a functional way.

The utility quickly falls apart because it’s difficult to create re-runnable Ansible (people can barely pronounce idempotent as it is) that could handle incremental updates. Also, a registry provides many other important functions, like service health and basic cross-node storage.
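One way to see the gap between the two models is Ansible’s dynamic inventory convention: instead of a static file, Ansible can call a script with `--list` and read JSON from stdout. The sketch below is a minimal, hypothetical version of that bridge; the `REGISTRY` dict stands in for a live query against Consul or etcd, and all the names and addresses are illustrative.

```python
import json

# Hypothetical registry snapshot -- in a real setup this would come
# from an HTTP query against Consul or etcd, not a static dict.
REGISTRY = {
    "etcd": ["10.0.0.11", "10.0.0.12", "10.0.0.13"],
    "web":  ["10.0.0.21", "10.0.0.22"],
}

def build_inventory(registry):
    """Translate a service->hosts map into Ansible dynamic-inventory JSON."""
    inventory = {"_meta": {"hostvars": {}}}
    for service, hosts in registry.items():
        inventory[service] = {"hosts": hosts}
    return inventory

if __name__ == "__main__":
    # Ansible invokes the script with --list and expects JSON on stdout.
    print(json.dumps(build_inventory(REGISTRY), indent=2))
```

The point of the sketch is the direction of the data flow: the registry is the source of truth and the inventory becomes a derived, regenerated artifact rather than hand-maintained state.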

It may not be perfect, but I thought it was a @pczarkowski insight worth passing on. What do you think?

Steven Spector and I talked about “Hybrid DevOps” as a concept. Our discussion led to a ‘there’s a picture for that!’ moment that often helped clarify the concept. We believe that this concept, like Rugged DevOps, is additive to existing DevOps thinking and culture. It’s about expanding our thinking to include orchestration and composability.

2016 is the year we break down the monoliths. We’ve spent a lot of time talking about monolithic applications and microservices; however, there’s an equally deep challenge in ops automation.

Anti-monolith composability means making our automation into function blocks that can be chained together by orchestration.

What is going wrong? We’re building fragile tightly coupled automation.

Most of the automation scripts that I’ve worked with become very long interconnected sequences well beyond the actual application that they are trying to install. For example, Kubernetes needs etcd as a datastore. The current model is to include the etcd install in the install script. The same is true for SDN install/configuration and post-install tests and dashboard UIs. The simple “install Kubernetes” quickly explodes into a kitchen sink of related adjacent components.

Those installs quickly become fragile and bloated. Even worse, they have hidden dependencies. What happens when etcd changes? Now we’ve got to track down all the references to it buried in etcd-based applications. Further, we don’t get the benefits of etcd deployment improvements like secure or scaled configurations.

What can we do about it? Resist the urge to create vertical silos.

It’s tempting and fast to create automation that works in a very prescriptive way for a single platform, operating system and tool chain. The work of creating abstractions between configuration steps seems like a lot of overhead. Even if you create those boundaries or reuse upstream automation, you’re likely to be vulnerable to changes within that component. All these concerns drive operators to walk away from working collaboratively with each other and with developers.
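The “function blocks chained by orchestration” idea above can be sketched in a few lines. This is a toy model, not a real tool: each install step is an independent, idempotent function with an explicit contract (here a shared dict), and the orchestrator only chains them. The endpoint value and step names are invented for illustration.

```python
def install_etcd(state):
    # Owns only etcd; publishes its endpoint as an explicit contract.
    # setdefault keeps the step idempotent on re-runs.
    state.setdefault("etcd_endpoint", "http://10.0.0.11:2379")
    return state

def install_kubernetes(state):
    # Consumes etcd through the contract instead of installing it inline.
    assert "etcd_endpoint" in state, "etcd must be provided by an upstream block"
    state["kubernetes"] = {"datastore": state["etcd_endpoint"]}
    return state

def orchestrate(steps, state=None):
    """Chain function blocks; each sees only the shared contract dict."""
    state = state or {}
    for step in steps:
        state = step(state)
    return state

result = orchestrate([install_etcd, install_kubernetes])
```

Because the Kubernetes block depends on the contract key rather than on how etcd was installed, swapping in a hardened or clustered etcd block changes nothing downstream, which is exactly the decoupling the monolithic script forfeits.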

Giving up on collaborative Ops hurts us all and makes it impossible to engineer excellent operational tools.

I expect 2016 to be a confusing year for everyone in IT. For 2015, I predicted that new uses for containers are going to upset cloud’s apple cart; however, the replacement paradigm is not clear yet. Consequently, I’m doing a prognostication mix and match: five predictions and seven items on a “container technology watch list.”

TL;DR: In 2016, Hybrid IT arrives on Containers’ wings.

Considering my expectations below, I think it’s time to accept that all IT is heterogeneous and stop trying to box everything into a mono-cloud. Accepting hybrid as current state unblocks many IT decisions that are waiting for things to settle down.

Here’s the memo: “Stop waiting. It’s not going to converge.”

2016 Predictions

Container Adoption Seen As Two Stages: We will finally accept that Containers have strength for both infrastructure (first stage adoption) and application life-cycle (second stage adoption) transformation. Stage one offers value so we will start talking about legacy migration into containers without shaming teams that are not also rewriting apps as immutable microservice unicorns.

OpenStack continues to bump and grow. Adoption is up and open alternatives are disappearing. For dedicated/private IaaS, OpenStack will continue to gain in 2016 for basic VM management. Both competitive and internal pressures continue to threaten the project, but I believe those threats will not materialize in 2016. Here’s my complete OpenStack 2016 post.

Amazon, GCE and Azure make everything else questionable. These services are so deep and rich that I’d question anyone who is not using them. At least one of them simply has to be part of everyone’s IT strategy for financial, talent and technical reasons.

Cloud API becomes irrelevant. Cloud API is so 2011! There are now so many reasonable clients to abstract various infrastructures that Cloud APIs are less relevant. Capability, interoperability and consistency remain critical factors, but the APIs themselves are not interesting.

2016 Container Tech Watch List

I’m planning posts about all these key container ecosystems for 2016. I think they are all significant contributors to the emerging application life-cycle paradigm.

Service Containers (& VMs): There’s an emerging pattern of infrastructure managed containers that provide critical host services like networking, logging, and monitoring. I believe this pattern will provide significant value and generate its own ecosystem.

Networking & Storage Services: Gaps in networking and storage for containers need to get solved in a consistent way. Expect a lot of thrash and innovation here.

Container Orchestration Services: This is the current battleground for container mind share. Kubernetes, Mesos and Docker Swarm get headlines but there are other interesting alternatives.

Containers on Metal: Removing the virtualization layer reduces complexity, overhead and cost. Container workloads are good choices to re-purpose older servers that have too little CPU or RAM to serve as VM hosts. Who can say no to free infrastructure?! While an obvious win to many, we’ll need to make progress on standardized scale and upgrade operations first.

Immutable Infrastructure: Even as this term wins the “most confusing” concept in cloud award, it is an important one for container designers to understand. The unfortunate naming paradox is that immutable infrastructure drives disciplines that allow fast turnover, better security and more dynamic management.

Microservices: The latest generation of service oriented architecture (SOA) benefits from a new class of distributed service registration platforms (etcd and Consul) that bring new life into SOA.

Paywall Registries: The importance of container registries is easy to overlook because they seem to be version 2.0 of package caches; however, container layering makes these services much more dynamic and central than many realize. (Bernard Golden and I have already posted about this.)

What two items did not make the 2016 cut? 1) Special-purpose container-focused operating systems like CoreOS or RancherOS. While interesting, I don’t think these deployment technologies have architectural-level influence. 2) Container security via VMs. I’m seeing patterns where containers may actually be more secure than VMs; much of the VM-wrapping push is FUD created by people with a vested interest in virtualization.

Did I miss something? I’d love to know what you think I got right or wrong!

Progress and investment have been substantial and, happily, organic. Like many platforms, its success relies on a reasonable balance between strong opinions about “right” patterns and enough flexibility to accommodate exceptions.

From a well patterned foundation, development teams find acceleration. This seems to be helping CloudFoundry win some high-profile enterprise adopters.

The interesting challenge ahead of the project comes from building more complex autonomous deployments. With the challenge of horizontal scale arguably behind them, CF users are starting to build more complex architectures. This includes dynamic provisioning of providers (like databases, object stores and other persistent adjacent services) and connecting to containerized “micro-services.” (see Matt Stine’s preso)

While this is a natural evolution, it adds an order of magnitude more complexity because the contracts between previously isolated layers are suddenly not reliable.

For example, what happens to a CF deployment when the database provider is field-upgraded to a new version? That could introduce breaking changes in dependent applications that are completely opaque to the data provider. These are hard problems to solve.

Happily, those are exactly the discussions that we’re starting to have with container orchestration systems. It’s also part of the dialog that I’ve been trying to drive with Functional Operations (FuncOps preso) on the physical automation side. I’m optimistic that CloudFoundry patterns will help make this problem more tractable.

OpenCrowbar has been using Consul more and more deeply. We’ve reached the point where we must register services on Consul to pass automated tests.

Consequently, I had to write a little Consul client in Erlang.

The client is very basic, but it seems to perform all of the required functions. It relies on some other libraries in OpenCrowbar’s BDD, but they are relatively self-contained. Pull requests welcome if you’d like to help build this out.
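The actual client is Erlang, but the registration flow it needs is small enough to sketch against Consul’s agent HTTP API. The sketch below (in Python, for readability) builds the JSON body for `PUT /v1/agent/service/register` and sends it to a local agent; the service name, port, and TTL values are illustrative, and `register` naturally requires a running Consul agent at the default address.

```python
import json
import urllib.request

CONSUL = "http://127.0.0.1:8500"  # default local Consul agent address

def registration_payload(name, port, ttl="10s"):
    """Build the JSON body for Consul's /v1/agent/service/register endpoint."""
    return {
        "Name": name,
        "Port": port,
        # A TTL check: the service must heartbeat within `ttl` or be
        # marked critical -- the health feature a static inventory lacks.
        "Check": {"TTL": ttl},
    }

def register(name, port):
    """PUT the registration to the local agent (requires a running Consul)."""
    body = json.dumps(registration_payload(name, port)).encode()
    req = urllib.request.Request(
        f"{CONSUL}/v1/agent/service/register", data=body, method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```

Keeping the payload construction separate from the HTTP call makes the client trivial to exercise in automated tests, which matters once registration is a gate for the test suite.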

After writing pages of notes about the impact of Docker, microservice architectures, mainstreaming of Ops Automation, software defined networking, exponential data growth and the explosion of alternative hardware architecture, I realized that it all boils down to the death of cloud as we know it.

OK, we’re not killing cloud per se this year. It’s more that we’ve put 10 pounds of cloud into a 5 pound bag so it’s just not working in 2015 to call it cloud.

Cloud was happily misunderstood back in 2012 as virtualized infrastructure wrapped in an API beside some platform services (like object storage).

That illusion will be shattered in 2015 as we fully digest the extent of the beautiful and complex mess that we’ve created in the search for better scale economics and faster delivery pipelines. 2015 is going to cause a lot of indigestion for CIOs, analysts and wandering technology executives. No one can pick the winners with Decisive Leadership™ alone because there are simply too many possible right ways to solve problems.

Here’s my list of the seven cloud disrupting technologies and frameworks that will gain even greater momentum in 2015:

Docker – I think that Docker is the face of a larger disruption around containers and packaging. I’m sure Docker is not the whole story by itself. There are a fleet of related technologies and Docker replacements; however, there’s no doubt that it’s leading a timely rethinking of application life-cycle delivery.

New languages and frameworks – it’s not just the rapid maturity of Node.js and Go, but the frameworks and services that we’re building (like Cloud Foundry or Apache Spark) that change the way we use traditional languages.

Microservice architectures – this is more than containers, it’s really Functional Programming for Ops (aka FuncOps) that’s a new generation of service oriented architecture that is being empowered by container orchestration systems (like Brooklyn or Fleet). Using microservices well seems to redefine how we use traditional cloud.

Mainstreaming of Ops Automation – We’re past “if DevOps” and into the how. Ops automation, not cloud, is the real puppies vs cattle battle ground. As IT creates automation to better use clouds, we create application portability that makes cloud disappear. This freedom translates into new choices (like PaaS, containers or hardware) for operators.

Software defined networking – SDN means different things but the impacts are all the same: we are automating networking and integrating it into our deployments. The days of networking and compute silos are ending and that’s going to change how we think about cloud and the supporting infrastructure.

Exponential data growth – you cannot build applications or infrastructure without considering how your storage needs will grow as we absorb more data streams and internet of things sources.

Explosion of alternative hardware architecture – In 2010, infrastructure was basically pizza box or blade from a handful of vendors. Today, I’m seeing a rising tide of alternative architectures, including ARM, converged and storage-focused designs, from an increasing cadre of sources, including vendors sharing open designs (OCP). With improved automation, these new “non-cloud” options become part of the dynamic infrastructure spectrum.

Today these seven items create complexity and confusion as we work to balance the new concepts and technologies. I can see a path forward that redefines IT to be both more flexible and dynamic while also being stable and performant.