Tag Archives: Open Infrastructure

Note: OpenStack voting is limited to community members – if you registered by the deadline, you will receive your unique ballot by email. You have 8 votes to distribute as you see fit.

I believe open infrastructure software is essential for our IT future.

Open source has been a critical platform for innovation and creating commercial value for our entire industry; however, we have to deliberately foster communities for open source activities that connect creators, users and sponsors. OpenStack has built exactly that for people interested in infrastructure and that is why I am excited to run for the Foundation Board again.

OpenStack is at a critical juncture in transitioning from a code focus to a community focus.

We must allow the OpenStack code to consolidate around a simple mission while the community explores adjacent spaces. It will be a confusing and challenging transition because we’ll have to create new spaces that leave part of the code behind – what we’d call the Innovator’s Dilemma inside of a single company. And, I don’t think OpenStack has a lot of time to figure this out.

That change requires both strong and collaborative leadership by people who know the community but are not too immersed in the code.

I am seeking community support for my return to the OpenStack Foundation Board. In the two years since I was on the board, I’ve worked in the Kubernetes community to support operators. While on the board, I fought hard to deliver testable interoperability (DefCore) and against expanding the project focus (Big Tent). As a start-up and open source founder, I bring a critical commercial balance to a community that is too easily dominated by large vendor interests.

Re-elected or not, I’m a committed member of the OpenStack community who is enthusiastically supporting the new initiatives by the Foundation. I believe strongly that our industry needs to sponsor and support open infrastructure. I also believe that the dominant place of the OpenStack IaaS code has changed, and we need to focus those efforts on being highly collaborative.

OpenStack cannot keep starting with “use our code” – we have to start with “let’s understand the challenges.” That’s how we’ll keep building a strong open infrastructure community.

If these ideas resonate with you, then please consider supporting me for the OpenStack board. If they don’t, please vote anyway! There are great candidates on the ballot again and voting supports the community.

IT is subject to seismic shifts right now. Here’s how we cope together.

For a long time, I’ve advocated for open operations (“OpenOps”) as a way to share best practices about running data centers. I’ve worked hard in OpenStack and, recently, Kubernetes communities to have operators collaborate around common architectures and automation tools. I believe the first step in these efforts starts with forming a community forum.

Here’s how Dean Nelson, Infrastructure Masons organizer and head of Uber Compute, describes the initiative:

An Infrastructure Mason Partner is a professional who develops products, builds or supports infrastructure projects, or operates infrastructure on behalf of end users. Like their end-user peers, they are dedicated to the advancement of the Industry, development of their fellow masons, and empowering business and personal use of the infrastructure to better the economy, the environment, and society.

We’re in the midst of tremendous movement in IT infrastructure. The change to highly automated, scale-out design was enabled by cloud but is not cloud-specific. This shift is reshaping how IT is practiced at the most fundamental levels.

We in IT Ops are feeling enormous pressure to accelerate workflow processes and to innovate around very complex challenges.

Open operations loses if we respond by creating thousands of isolated silos or moving everything to a vendor-specific island like AWS. The right answer is to find ways to share practices and tooling that are tolerant of real operational complexity and the legitimate needs for heterogeneity.

Interested in more? Get involved with the group! I’ll be sharing more details here too.

The RackN team has been working on making DevOps more portable for over five years. Portability between vendors, sites, tools and operating systems means that our automation needs to be hybrid in multiple dimensions by design.

I believe that applications should drive the infrastructure, not the reverse. I’ve heard many times that the “infrastructure should be invisible to the user.” Unfortunately, lack of abstraction and composability makes it difficult to code across platforms. I like the term “fidelity gap” to describe the cost of these differences.

Everyone wants to get stuff done quickly; however, we make the same hard-coded ops choices over and over again. Big bang configuration automation that embeds sequence assumptions into the script is not just technical debt; it’s fragile and difficult to upgrade or maintain. The problem is not configuration management (that’s a critical component!), it’s the lack of system-level tooling that forces us to overload the configuration tools.

My ops automation experience says that these four factors must be solved together because they are interconnected.

What would a platform that embraced all these ideas look like? Here is what we’ve been working towards with Digital Rebar at RackN:

| Mono-Infrastructure IT | “Hybrid DevOps” |
| --- | --- |
| Locked into a single platform | Portable between sites and infrastructures with layered ops abstractions. |
| Limited interop between tools | Adaptive to mix and match best-for-job tools. Use the right scripting for the job at hand and never force-migrate working automation. |
| Ad hoc security based on site specifics | Secure using repeatable automated processes. We fail at security when things get too complex to change and adapt. |
| Difficult to reuse ops tools | Composable Modules enable Ops Pipelines. We have to be able to interchange parts of our deployments for collaboration and upgrades. |
| Fragile configuration management | Service orientation simplifies API integration. The number of APIs and services is increasing; configuration management alone is not sufficient. |
| Big bang: configure-then-deploy scripting | Orchestrated action is critical because sequence matters. Building a cluster requires sequential (often iterative) operations between nodes in the system. We cannot build robust deployments without ongoing control over order of operations. |
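The idea that sequence matters more than any single configuration step can be sketched with a dependency-aware runner. This is only an illustration of the concept, not Digital Rebar’s actual API; the task names and the `deps` mapping are assumptions.

```python
# Minimal sketch of dependency-aware orchestration: each step declares its
# prerequisites instead of embedding the sequence in one big-bang script.
# Task names and structure are hypothetical, not Digital Rebar's API.
from graphlib import TopologicalSorter

deps = {
    "install-os": [],
    "configure-network": ["install-os"],
    "join-cluster": ["configure-network"],
    "deploy-workload": ["join-cluster"],
}

# static_order() yields a valid execution order that respects every
# declared prerequisite; here the chain makes the order unique.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Because steps only declare prerequisites, inserting a new step (say, a firmware update before `install-os`) changes the computed order without rewriting the rest of the automation.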

Should we call this “Hybrid DevOps?” That sounds so buzz-wordy!

I’ve come to believe that Hybrid DevOps is the right name. More technical descriptions like “composable ops” or “service oriented devops” or “cross-platform orchestration” just don’t capture the real value. All these names fail to capture the portability and multi-system flavor that drives the need for user control of hybrid in multiple dimensions.

Like my previous DefCore interop windmill tilting, this is not something that can be done alone. Open infrastructure is a collaborative effort and I’m looking for your help and support. I believe solving this problem benefits us as an industry and individually as IT professionals.

So, what is open infrastructure? It’s not about running on open source software. It’s about creating platform choice and control. In my experience, that’s what defines open for users (and developers are not users).

I’ve spent several years helping lead OpenStack interoperability (aka DefCore) efforts to ensure that OpenStack cloud APIs are consistent between vendors. I strongly believe that effort is essential to build an ecosystem around the project; however, in talking to enterprise users, I’ve learned that their real interoperability gap is between the many platforms – AWS, Google, VMware, OpenStack and bare metal – that they use every day.

Instead of focusing inward to one platform, I believe the bigger enterprise need is to address automation across platforms. It is something I’m starting to call hybrid DevOps because it allows users to mix platforms, service APIs and tools.

Open infrastructure in that context is being able to work across platforms without being tied into one platform choice even when that platform is based on open source software. API duplication is not sufficient: the operational characteristics of each platform are different enough that we need a different abstraction approach.

We have to be able to compose automation in a way that tolerates substitution based on infrastructure characteristics. This is required for metal because of variation between hardware vendors and data center networking and services. It is equally essential for cloud because of variation between IaaS capabilities and service delivery models. Basically, those minor differences between clouds create significant challenges in interoperability at the operational level.
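Composition that tolerates substitution can be sketched as workload automation written against an abstract infrastructure driver, with a concrete driver swapped in per platform. All class and method names below are hypothetical, invented for illustration; they are not a real Digital Rebar or cloud API.

```python
# Sketch: the workload logic stays fixed while the infrastructure
# "driver" is substituted per platform. Names here are hypothetical.
from abc import ABC, abstractmethod

class InfraDriver(ABC):
    @abstractmethod
    def provision_node(self, name: str) -> str: ...

class CloudDriver(InfraDriver):
    def provision_node(self, name):
        return f"cloud instance for {name}"

class MetalDriver(InfraDriver):
    def provision_node(self, name):
        # bare metal hides extra steps (e.g. BIOS/RAID setup) behind
        # the same interface, absorbing hardware variation here
        return f"bare-metal machine for {name}"

def deploy_cluster(driver: InfraDriver, nodes):
    # the same cluster automation runs unchanged on either platform
    return [driver.provision_node(n) for n in nodes]

print(deploy_cluster(MetalDriver(), ["node1", "node2"]))
```

The point is that the operational differences between platforms live inside the driver, so the cluster-level automation never has to branch on which infrastructure it is running against.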

Rationalizing APIs does little to address these more structural differences.

The problem is compounded because the differences are not nicely segmented behind abstraction layers. If you work to build and sustain a fully integrated application, you must account for site-specific needs throughout your application stack, including networking, storage, access and security. I’ve described it this way: all deployments have 80% of the work in common, but the remaining 20% is mixed in with the 80% instead of being nicely layered. So, ops is cookie dough, not vinaigrette.

Getting past this problem for initial provisioning on a single platform is a false victory. The real need is portable and upgrade-ready automation that can be reused and shared. Critically, we also need to build upon the existing foundations instead of requiring a blank slate. There is openness value in heterogeneous infrastructure so we need to embrace variation and design accordingly.

This is the vision the RackN team has been working towards with the open source Digital Rebar project. We are now able to showcase workload deployments (Docker, Kubernetes, Ceph, etc.) on multiple cloud platforms that also translate to full bare metal deployments. Unlike previous generations of this tooling (some will remember Crowbar), we’ve been careful to avoid injecting external dependencies into the DevOps scripts.

While we’re able to demonstrate a high degree of portability (or fidelity) across multiple platforms, this is just the beginning. We are looking for users and collaborators who want to build open infrastructure from an operational perspective.

You are invited to join us in making open cross-platform operations a reality.