Welcome to our new format for the RackN and Digital Rebar Weekly Review. It contains the same great information you are accustomed to; however, I have reorganized it to place a new section at the start with my thoughts on various topics. You can still find the latest news items related to Edge, DevOps and other relevant topics below.

Building Bridges between Operators, Developers and Architects

DevOps is not enough. Infrastructure architects play a key role and are often not considered. It is our experience that bringing architects together with the DevOps team leads to the optimal solution. Hear from Rob Hirschfeld on why this is critical for success:

Redefining PXE Provisioning for the Modern Data Center

Over the past 20 years, Linux admins have defined provisioning with a limited scope: PXE booting with Cobbler. This approach remains popular today even though it only installs an operating system, limiting operators’ ability to move beyond an outdated paradigm.

Digital Rebar is the answer operators have been looking for, as provisioning has taken on a new role within the data center: it now spans workflow management, infrastructure automation, bare metal, virtual machines inside and outside the firewall, and the coming need for edge IoT management.

Re-defining physical automation to make it highly repeatable and widely consumable, while also meeting the necessarily complex and evolving heterogeneous data center environment, is the challenge the RackN team is solving. To meet this challenge, we have developed a unique philosophy in how we build our technology: both the open source Digital Rebar and the additional RackN packages.

- Stand-alone Provisioning
- Building Software from the API
- Single Golang Executable
- Modular Components – Composable Content
- Operator Defined Workflows
- Immutable Infrastructure
- Distributed or Consolidated Architectures

Stand-alone Provisioning

It is critical that Digital Rebar Provision (DRP) provides operators the maximum flexibility in where to run the service (server, top-of-rack switch, ARM, Intel, etc.) as well as removing any dependencies that might restrict its deployment. Each environment has its own unique Infrastructure DNA: the hardware, operating systems, and application stacks that drive the infrastructure underlay.

Building Software from the API

The Digital Rebar Provision solution is built with an API-first mentality. Features and enhancements are implemented in the API first (making it a first-class citizen), and the CLI is dynamically generated from the API, which ensures the CLI covers 100% of the API.

This methodology also allows for the CLI to directly follow the structure and syntax of the API, making it easy for an Operator or Developer to understand and flexibly interchange the API and CLI syntax.
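As an illustration of that parity, here is a minimal Python sketch of how a CLI can be generated directly from an API specification so the two never drift apart. The resource names, verbs, and `/api/v3` path here are illustrative assumptions, not DRP’s documented API surface:

```python
# Illustrative API spec: resources and the verbs they support.
API_SPEC = {
    "machines": ["list", "show", "create", "destroy"],
    "bootenvs": ["list", "show", "create", "destroy"],
}

def generate_cli_commands(spec):
    """Derive every CLI command from the API spec, so CLI coverage
    of the API is 100% by construction rather than by hand-coding."""
    commands = {}
    for resource, verbs in spec.items():
        for verb in verbs:
            # e.g. the CLI command "machines list" maps onto the
            # API path "/api/v3/machines"
            commands[f"{resource} {verb}"] = f"/api/v3/{resource}"
    return commands

cli = generate_cli_commands(API_SPEC)
```

Because the commands are derived rather than written, the CLI syntax mirrors the API structure exactly, which is the interchangeability the text describes.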

At RackN we believe strongly in the 12-Factor App methodology for designing modern software. DRP is a direct reflection of these principles.

Single Golang Executable

DRP is built with Go (Golang), a modern compiled language that is easily cross-compiled for multiple operating systems and processor architectures. As a benefit, the DRP service and CLI tool (dr-provision and drpcli, respectively) can run on platforms ranging from small Raspberry Pi embedded systems and top-of-rack network switches to huge Hyper-Converged Infrastructure (HCI) servers, and everything in between. It is currently compiled for and runs on Linux (ARM and Intel, 32-bit and 64-bit), Mac OS X (64-bit), and Windows (64-bit).

The dr-provision binary is small and lightweight, with almost no external dependencies. The current external dependencies are unzip, p7zip, and bsdtar, and these should be removed in a future version. At only 30 MB in size, it requires very few resources to run.

Modular Components – Composable Content

A modular architecture allows us to create complex solutions from a set of simple, well-tested building blocks. Breaking complex problems down into small components, combined with strong templating capabilities, creates a structure with strong reuse patterns. This approach permeates all of the “Content” components that form the foundational building blocks for composable provisioning activities.
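The templating idea above can be sketched in a few lines of Python: one small, reusable building block is parameterized per environment instead of being copied and edited. The kickstart-style fields and mirror URLs are illustrative assumptions, not actual DRP content:

```python
from string import Template

# One well-tested building block, reused across environments.
kickstart_tmpl = Template(
    "install\n"
    "url --url $mirror\n"
    "rootpw --plaintext $password\n"
)

# The same block composed with different parameters per environment.
dev = kickstart_tmpl.substitute(
    mirror="http://mirror.dev.example/centos", password="devpass")
prod = kickstart_tmpl.substitute(
    mirror="http://mirror.prod.example/centos", password="s3cret")
```

The value is that the shared template is tested once, while every environment only supplies its own parameters.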

Operator Defined Workflows

Each environment has a unique set of services, applications, tooling, and practices for managing its infrastructure. Building on the concepts of Composable Content, we give operators and developers a flexible structure in which they control how loosely or tightly to integrate the DRP provisioning services into their environment. Every customer environment has a unique set of tools, and this methodology allows for smooth integration with those operational principles.
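A workflow of this kind can be modeled as an operator-chosen, ordered list of reusable stages. The stage names and the CMDB-registration step below are hypothetical examples, not DRP-shipped stages:

```python
# A library of reusable stages; each takes and returns machine state.
STAGES = {
    "discover":      lambda m: {**m, "discovered": True},
    "install-os":    lambda m: {**m, "os": "ubuntu-20.04"},
    "register-cmdb": lambda m: {**m, "cmdb": True},  # site-specific step
}

def run_workflow(machine, stage_names):
    """Run the operator-defined sequence of stages over a machine."""
    for name in stage_names:
        machine = STAGES[name](machine)
    return machine

# One operator integrates tightly (adds a CMDB step), another loosely:
tight = run_workflow({}, ["discover", "install-os", "register-cmdb"])
loose = run_workflow({}, ["discover", "install-os"])
```

The stages are shared and well tested; only the ordering and selection differ per site, which is what makes the integration as loose or tight as the operator wants.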

Immutable Infrastructure

Maintaining hardware and software in a massive data center or cloud is a significant challenge even without the additional overhead of ensuring that patches are properly applied. Any change to an active solution can introduce complications on a live system, which is a major barrier to completing security updates and other patches in a timely manner.

A better method is to deploy only a “golden image” to the live system and, rather than patch each individual instance, simply tear down the instance and replace it with a new copy of the golden image. All patches can be applied and tested to create a new golden image, which is then easily rolled out in the create/destroy/re-create model of immutability.
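That create/destroy/re-create cycle can be sketched as follows. The image-versioning scheme and patch names here are made up for illustration; the point is that instances are never patched in place:

```python
def build_golden_image(base_version, patches):
    """Apply and test patches offline, producing a new image version
    (instead of patching running instances)."""
    return {"version": base_version + 1, "patched": list(patches)}

def roll_fleet(fleet, image):
    """Tear down each instance and re-create it from the new image."""
    return [{"id": inst["id"], "image": image["version"]} for inst in fleet]

# Three instances running image version 1.
fleet = [{"id": i, "image": 1} for i in range(3)]

# Patch the image once, then replace every instance wholesale.
new_image = build_golden_image(1, ["security-fix"])
fleet = roll_fleet(fleet, new_image)
```

Because patching and testing happen only on the image build, the live fleet is always running a known, tested artifact.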

Distributed or Consolidated Architectures

Traditional data center and lab environments utilize centralized provisioning services. While DRP has strong support for this scale-up or consolidated model, shifting patterns in application and service deployment topology dictate an evolving provisioning service. Current Internet-of-Things (IoT), Edge, and Fog architectures distribute resources across dispersed environments.

In the traditional model, a large-scale operator might support a handful of data centers with tens of thousands of hosts in each facility. These new architecture patterns can instead encompass thousands of different locations, each hosting a few dozen to a few hundred hosts. This shift places a significant burden on operational and infrastructure management tooling to support the complexities of these scale-out designs.

With strong multi-endpoint management tooling, the RackN portal easily supports both models for provisioning. Long-lived scale-up environments with a service that is updated, upgraded, managed, loved, and cared for can exist seamlessly alongside environments with a create/destroy pattern that treats thousands of provisioning endpoints as disposable assets.

Our architectural plans for Digital Rebar are beyond big – they are for massive distributed scale. Not up, but out. We are designing for the case where we have common automation content packages distributed over 100,000 stand-alone sites (think 5G cell towers) that are not synchronously managed. In that case, there will be version drift between the endpoints and content. For example, we may need to patch an installation script quickly over a whole fleet but want to upgrade the endpoints more slowly.
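The drift scenario above, where content is patched fleet-wide quickly while endpoint binaries upgrade slowly, can be sketched like this. The version numbers and batch mechanics are illustrative, not RackN’s actual rollout tooling:

```python
# A fleet of stand-alone sites, each with an endpoint version and a
# content-package version that are allowed to drift independently.
fleet = [{"site": s, "endpoint_ver": "3.2", "content_ver": "1.0"}
         for s in range(5)]

def patch_content(fleet, new_ver):
    """Content (e.g. a fixed install script) can be pushed everywhere at once."""
    for ep in fleet:
        ep["content_ver"] = new_ver
    return fleet

def upgrade_endpoints(fleet, new_ver, batch):
    """Endpoint binaries upgrade slowly, a small batch at a time."""
    for ep in fleet[:batch]:
        ep["endpoint_ver"] = new_ver
    return fleet

fleet = patch_content(fleet, "1.1")           # urgent fix, whole fleet
fleet = upgrade_endpoints(fleet, "3.3", batch=2)  # slow, staged upgrade
```

The consequence is exactly the problem the text raises: at any moment, multiple endpoint versions run the same content, so content must tolerate version drift.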

It’s also part of the RackN move to a biweekly release cadence for Digital Rebar, which means we are iterating from tip development to stable every two weeks. The cadence is fast because we don’t want operators to have to deploy the development “tip” just to access new features or bug fixes.

This works for several reasons. First, much of the Digital Rebar value is delivered as content instead of in the scaffolding. Each content package has its own version cycle and is not tied to Digital Rebar versions. Second, many Digital Rebar features are relatively small, incremental additions. Faster releases allow content creators and operators to access that buttery goodness more quickly without trying to manage the less stable development tip.

Critical enablers of this release pace are feature flags. Starting in v3.2, Digital Rebar introduced system-level flags that are set when new features are added. These flags allow content developers to introspect the system in a multi-version way and see which behaviors are available on each endpoint. This is much more consistent and granular than version matching.

We are not designing a single endpoint system: we are planning for content that spans 1,000s of endpoints.

Feature flags are part of our 100,000-endpoint architecture thinking. In large-scale systems, there can be significant version drift within a fleet deployment. We have to expect that automation designers want to enable advanced features before they are universally deployed in the fleet. That means the system needs a way to easily advertise specific capabilities internally. Automation can then be written with different behaviors depending on the environment. For example, changing exit codes could have broken existing scripts, except that scripts used flags to determine which codes were appropriate for the system. These are NOT API issues that work well with semantic versioning (semver); they are deeper system behaviors.
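The exit-code example above can be sketched as flag-gated automation: the script asks each endpoint which capabilities it advertises and adapts, rather than matching version numbers. The flag name and code values here are illustrative assumptions, not DRP’s actual flags:

```python
def exit_code_for(endpoint_flags, outcome):
    """Pick exit codes based on the endpoint's advertised feature flags,
    so one script works across a fleet with version drift."""
    if "revised-exit-codes" in endpoint_flags:
        return {"ok": 0, "retry": 16}[outcome]   # newer behavior
    return {"ok": 0, "retry": 1}[outcome]        # legacy behavior

legacy_endpoint = set()                    # no flags advertised
upgraded_endpoint = {"revised-exit-codes"}

legacy_code = exit_code_for(legacy_endpoint, "retry")
modern_code = exit_code_for(upgraded_endpoint, "retry")
```

Gating on the flag rather than on a version string is what makes the behavior check granular: a backported fix can advertise the flag even on an older endpoint version.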

This matters even if you only have a single endpoint because it also enables sharing in the Digital Rebar community.

Without these changes, composable automation designed for the Digital Rebar community would quickly become very brittle and hard to maintain. Our goal is to ensure a decoupling of endpoint and content. This same benefit allows the community to share packages and large scale sites to coordinate upgrades. I don’t think that we’re done yet. This is a hard problem and we’re still evolving all the intricacies of updating and delivering composable automation.

It’s the type of complex, operational thinking that excites the RackN engineering team. I hope it excites you too because we’d love to get your thinking on how to make it even better!

The embedded video is an excellent RackN and Digital Rebar overview created by Rob Hirschfeld and Greg Althaus, co-founders of RackN, on the critical issue facing data center operations teams. Their open-source-based offering addresses the integration gap between platform/orchestration tools and control/provisioning technology.

By integrating with the platform and orchestration solutions, RackN is able to replace the control and provisioning tools without adding complexity or replacing established technology.

Watch the complete video below as Rob Hirschfeld provides the background of how RackN arrived at the current offering and the benefits for data center operators to support bare metal provisioning as well as immutable infrastructure. (Slides)

In this podcast, Stephen Spector, HPE Cloud Evangelist, and Greg Althaus, Co-Founder and CTO of RackN, talk about the integration point for Digital Rebar Provision with the Terraform solution. The specific focus is on delivering bare metal provisioning to users of Terraform.

Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
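As a flavor of what “codifies APIs into declarative configuration files” means, here is a hypothetical Terraform (HCL) fragment; the provider name, resource type, and attributes are illustrative assumptions, not the actual RackN/DRP provider schema:

```hcl
# Hypothetical resource: declares the desired state of a bare metal
# machine; Terraform reconciles the real machine to match it.
resource "drp_machine" "worker" {
  name    = "worker-01"
  bootenv = "ubuntu-install"
}
```

Because the file is declarative, it can be versioned, reviewed, and shared like any other code, and re-applying it converges infrastructure toward the declared state.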

Cloud Native thinking is thankfully changing the way we approach traditional IT infrastructure. These profound changes in how we build applications with 12-factor design and containers have deep implications for how we manage configuration and the tools we use to do it. These are not cloud-only impacts – the changes reach every corner of IT data centers.

“You still have to do configuration management but… we’re getting to a point we can do a lot less” (8:30)

Configuration Management is both necessary and very hard. I’ve written and spoken about the developer rebellion against Infrastructure (and will again at DOD Dallas!). The TL;DR on that lightning talk is “infrastructure sucks.”

In this podcast, Eric and I have time to stretch out and really discuss what’s going on, in both broad and specific terms. At the 15-minute mark, we start talking about how “radical simplicity” is coming to provisioning and deployment automation. We break down how the business needs for repeatable and robust automation are driving IT to rethink huge swaths of their infrastructure. That transitions into making a whole data center into a CI/CD pipeline.