Tag Archives: OpenStack


Note: OpenStack voting is limited to community members – if you registered by the deadline, you will receive your unique ballot by email. You have 8 votes to distribute as you see fit.

I believe open infrastructure software is essential for our IT future.

Open source has been a critical platform for innovation and creating commercial value for our entire industry; however, we have to deliberately foster communities for open source activities that connect creators, users and sponsors. OpenStack has built exactly that for people interested in infrastructure and that is why I am excited to run for the Foundation Board again.

OpenStack is at a critical juncture in transitioning from a code focus to a community focus.

We must allow the OpenStack code to consolidate around a simple mission while the community explores adjacent spaces. It will be a confusing and challenging transition because we’ll have to create new spaces that leave part of the code behind – what we’d call the Innovator’s Dilemma inside of a single company. And, I don’t think OpenStack has a lot of time to figure this out.

That change requires both strong and collaborative leadership by people who know the community but are not too immersed in the code.

I am seeking community support for my return to the OpenStack Foundation Board. In the two years since I was on the board, I’ve worked in the Kubernetes community to support operators. While on the board, I fought hard to deliver testable interoperability (DefCore) and against expanding the project focus (Big Tent). As a start-up and open source founder, I bring a critical commercial balance to a community that is too easily dominated by large vendor interests.

Re-elected or not, I’m a committed member of the OpenStack community who enthusiastically supports the new initiatives by the Foundation. I believe strongly that our industry needs to sponsor and support open infrastructure. I also believe that the dominant position of the OpenStack IaaS code has changed and we need to refocus those efforts to be highly collaborative.

OpenStack cannot keep starting with “use our code” – we have to start with “let’s understand the challenges.” That’s how we’ll keep building a strong open infrastructure community.

If these ideas resonate with you, then please consider supporting me for the OpenStack board. If they don’t, please vote anyway! There are great candidates on the ballot again and voting supports the community.

While the RackN team and I have been heads down radically simplifying physical data center automation, I’ve still been tracking some key cloud infrastructure areas. One of the more interesting ones to me is Edge Infrastructure.

This once obscure topic has come front and center based on the coming computing stress from home video, retail machine learning and distributed IoT. It’s clear that these needs cannot be served from centralized data centers.

While I’m posting primarily on the RackN.com blog, I like to take time to bring critical items back to my personal blog as a collection. WARNING: Some of these statements run counter to prevailing industry opinion. Please let me know what you think!

By far the largest issue of the Edge discussion was actually agreeing about what “edge” meant. It seemed as if every session had a 50% mandatory overhead in defining terms. Putting my usual operations spin on the problem, I chose to define edge infrastructure in data center management terms. Edge infrastructure has very distinct challenges compared to hyperscale data centers. Read the article for the list.

Running each site as a mini-cloud is clearly not the right answer. There are multiple challenges here. First, any scale infrastructure problem must be solved at the physical layer first. Second, we must have tooling that brings repeatable, automated processes to that layer. It’s not sufficient to have deep control of a single site: we must be able to reliably distribute automation over thousands of sites with limited operational support and bandwidth. These requirements are outside the scope of cloud focused tools.

If “cloudification” is not the solution then where should we look for management patterns? We believe that software development CI/CD and immutable infrastructure patterns are well suited to edge infrastructure use cases. We discussed this at a session at the OpenStack OpenDev Edge summit.
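The CI/CD and immutable-infrastructure argument above can be sketched in a few lines of code. This is a minimal illustration, not Digital Rebar’s actual design: the site records, field names, and `reconcile` helper are all hypothetical. What it shows is the property that matters at edge scale: content-addressed, versioned bundles plus idempotent convergence, so re-running the same rollout against thousands of sites is always safe.

```python
import hashlib

def desired_state(bundle: bytes) -> str:
    """Content-address the automation bundle so every site can verify it."""
    return hashlib.sha256(bundle).hexdigest()

def reconcile(site: dict, bundle: bytes) -> dict:
    """Idempotently converge one site: do nothing if it already runs the bundle."""
    target = desired_state(bundle)
    if site.get("bundle_sha") == target:
        return {**site, "status": "ok"}  # already converged; safe to re-run
    # A real system would push the bundle and restart services here.
    return {**site, "bundle_sha": target, "status": "updated"}

# Two hypothetical edge sites, one stale and one brand new.
sites = [{"name": "store-1"}, {"name": "store-2", "bundle_sha": "stale"}]
bundle = b"v2 automation payload"
results = [reconcile(s, bundle) for s in sites]
```

Because `reconcile` converges rather than scripts a sequence of steps, a lost connection or partial rollout is handled by simply running it again.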

What do YOU think? This is an evolving topic and it’s time to engage in a healthy discussion.

TL;DR: Operators (DevOps & SREs) have a hard job, we need to make time and room for them to redefine their jobs in a much more productive way.

The Cloudcast.net by Brian Gracely and Aaron Delp brings deep experience and perspective into their discussions based on their impressive technology careers and understanding of the subject matter. Their podcasts go deep quickly with substantial questions that get to the heart of the issue. This was my third time on the show (previous notes).

In episode 301, we go deeply into the meaning and challenges for Site Reliability Engineering (SRE) functions. We also cover some popular technologies that are of general interest.

2:30 Google’s SRE book gave a name, and even a new definition, to what I’ve been doing my whole career. The name evolved from being just about sites to a full-system perspective.

3:30 SRE and DevOps are aligned at the core. While DevOps is about process and culture, SRE is more about the function and “factory.”

4:30 Developers don’t want to be shoveling coal into the engine, but someone (the SREs) has to make sure that everything keeps running

5:15 Brian asks about impedance mismatch between Dev and Ops. How do we fix that?

6:30 Rob talks about the crisis brewing for operations innovation gap (link). Digital Rebar is designed to create site-to-site automation so Operators can share repeatable best practices.

7:30 OpenStack ran aground with Operators because we never created the practices that could be repeated. “Managed service as the required pattern is a failure of building good operational software.”

8:00 RackN decomposes operations into isolated units so that individual changes don’t break the software on top

9:20 Brian talks about how the increasing rate of releases means that operations doesn’t have the skills to keep up with patching.

10:10 That’s “underlay automation” and it’s even scarier because software is composed of all sorts of parts that have their own release cycles that are not synchronized.

11:30 We need to make system-level patch and security-update hygiene automatic

12:20 This is really hard!

13:00 Brian asks what are the baby steps?

13:20 We have to find baby steps where there are nice clean boundaries at every layer from the very most basic. For RackN, that’s DHCP and PXE and then up to Kubernetes.

15:15 Rob rants that renaming Ops teams as SRE is a failure because SRE has objectives like job equity that need to be included.

16:00 Org silos have antibodies that get in the way of automation and make it difficult for SREs and DevOps to succeed.

17:10 Those people have to be empowered to make change

17:40 The existing tools must be pluggable or you are hurting operators. There’s really no true greenfield, so we help people by making things work in existing data centers.

19:00 Scripts may have technical debt but that does not mean they should just be disposed.

19:20 New and shiny does not equal better. For example, Container Linux (aka CoreOS) does not solve all problems.

20:10 We need to do better creating bridges between existing and new.

20:40 How do we make Day 2 compelling?

21:15 Brian asks about running OpenStack on Kubernetes.

22:00 Rob is a fan of Kubernetes on Metal, but really, we don’t want metal and VMs to be different. That means that Kubernetes can be a universal underlay, which is threatening to OpenStack.

23:00 This is no longer a JOKE: “Joint OpenStack Kubernetes Environments”

23:30 Running things on Kubernetes (or OpenStack) is great because the abstractions hide complexity of infrastructure; however, at the physical layer you need something that exposes that complexity (which is what RackN does).

25:00 Brian asks at what point do you need to get past the easy abstractions

25:30 You want to never care ever. But sometimes you need the information for special cases.

26:20 We don’t want to make the core APIs complex just to handle the special cases.

27:00 There’s still a class of people who need to care about hardware. These needs should not be embedded into the Kubernetes (or OpenStack) API.

28:00 Brian summarizes that we should not turn 1% use cases into complexity for everyone. We need to foster the skill of coding for operators

28:45 For SREs, turning operations work into coding & automation is essential. That’s a key point in the 50% programming statement for SREs.

We’re very invested in talking about SRE and want to hear from you! How is your company transforming operations work to make it more sustainable, robust and human? We want to hear your stories and questions.

Welcome to the weekly post of the RackN blog recap of all things SRE. If you have any ideas for this recap or would like to include content please contact us at info@rackn.com or tweet Rob (@zehicle) or RackN (@rackngo)

“We’re going to keep solving problems in and around the OpenStack community. I’m excited to see the Foundation embracing that mission. There are still many hard decisions to make. For example, I believe that Kubernetes as an underlay is compelling for operators and will drive the OpenStack code base into a more limited role as a Kubernetes workload (check out my presentation about that at Boston). While that may refocus the coding efforts, I believe it expands the relevance of the open infrastructure community we’ve been building.

Building infrastructure software is hard and complex. It’s better to do it with friends so please join me in helping keep these open operations priorities very much alive.”

“Sometimes paradigm changes demand a rapid response and I believe unifying OpenStack services under Kubernetes has become such an urgent priority that we must freeze all other work until this effort has been completed.”

“It’s essential to solve these problems in an open way so that we can work together as a community of operators.”

As you would expect, RackN is very interested in your thoughts on this proposal and its impact not only on the OpenStack and Kubernetes communities but also how it can transform the ability of IT infrastructure teams to deploy complex technologies in a reliable and scalable manner.

Using Containers and Kubernetes to Enable the Development of Object-Oriented Infrastructure: Brendan Burns GlueCon Presentation

Is SRE a Good Term?
Interview with Rob Hirschfeld (RackN) and Charity Majors (Honeycomb) at Gluecon 2017

_____________

UPCOMING EVENTS

Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events please email info@rackn.com.

TL;DR: infrastructure operations is hard and we need to do a lot more to make these systems widely accessible, easy to sustain and lower risk. We’re discussing these topics on twitter…please join in. Themes include “do we really have consensus and will to act” and “already a solved problem” and “this hurts OpenStack in the end.”

It’s essential to solve these problems in an open way so that we can work together as a community of operators.

It feels like developers are quick to rally around open platforms and tools while operators tend to be tightly coupled to vendor solutions because operational work is tightly coupled to infrastructure. From that perspective, I’ve been very involved in the OpenStack and Kubernetes open source infrastructure platforms because I believe they create communities where we can work together.

This week, I posted connected items on VMblog and RackN that lay out a position where we bring together these communities.

Of course, I do have a vested interest here. Our open underlay automation platform, Digital Rebar, was designed to address a missing layer of physical and hybrid automation under both of these projects. We want to help accelerate these technologies by helping deliver shared best practices via software. The stack is additive – let’s build it together.

I’m very interested in hearing from you about these ideas here or in the context of the individual posts. Thanks!

TL;DR: Sometimes paradigm changes demand a rapid response and I believe unifying OpenStack services under Kubernetes has become such an urgent priority that we must freeze all other work until this effort has been completed.

By design, OpenStack chose to be unopinionated about operations.

That made sense for a multi-vendor project that was deeply integrated with the physical infrastructure and virtualization technologies. The cost of that decision has been high for everyone because we did not converge on shared practices that would drive ease of operations, upgrades or tuning. We ended up with waves of vendors vying to have the fastest, simplest and most open version.

Tragically, install became an area of competition instead of an area of collaboration.

Containers and microservice architectures (as required for Kubernetes and other container schedulers) are providing an opportunity to correct this course. The community is already moving towards containerized services with significant interest in using Kubernetes as the underlay manager for those services. I’ve laid out the arguments for and challenges ahead of this approach in other places.

These technical challenges involve tuning the services for cloud-native configuration and immutable designs. They include making sure that project configurations can be injected into containers securely and that inter-service communication can handle container life-cycles. Adjacent concerns like networking and storage also have to be considered. These are all solvable problems that can be resolved more quickly if the community acts together to target just one open underlay.
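As one concrete illustration of the secure-configuration-injection challenge, the sketch below builds a Kubernetes Secret manifest carrying a service config file. The `keystone-etc` name and the file content are placeholders invented for this example; the base64 encoding of the `data` field is what the Kubernetes Secret API actually expects.

```python
import base64

def secret_manifest(name: str, files: dict) -> dict:
    """Build a Kubernetes Secret manifest; values are base64-encoded per the API."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "data": {
            filename: base64.b64encode(content.encode()).decode()
            for filename, content in files.items()
        },
    }

# Hypothetical config file for an OpenStack service container.
manifest = secret_manifest("keystone-etc", {"keystone.conf": "[DEFAULT]\ndebug = false\n"})
```

Mounting the Secret as a volume, rather than baking config into the image, keeps the container immutable while still letting each deployment inject its own settings.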

The critical fact is that the changes are manageable and unifying the solution makes the project stronger.

Using Kubernetes for OpenStack service management does not eliminate or even solve the challenges of deep integration. OpenStack already has abstractions that manage vendor heterogeneity and those abstractions are a key value for the project. Kubernetes solves a different problem: it manages the application services that run OpenStack with a proven, understood pattern. By adopting this pattern fully, we finally give operators consistent, shared and open upgrade, availability and management tooling.
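The “proven, understood pattern” here is essentially the Kubernetes Deployment: replica counts give availability and the rolling-update strategy gives upgrades, with one shared shape for every service. A minimal sketch (the service name and image are placeholders, and this covers only stateless API services; databases and message queues need more care):

```python
def service_deployment(name: str, image: str, replicas: int = 2) -> dict:
    """Wrap a stateless service in a Deployment so Kubernetes supplies
    availability (replicas) and upgrades (rolling update) uniformly."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "strategy": {"type": "RollingUpdate"},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```

Changing only the image tag in this manifest gives every service the same upgrade story, which is exactly the consistent, shared operational tooling argued for above.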

Having a shared, open operational model would help drive OpenStack faster.

There is a risk to this approach: driving Kubernetes as the underlay for OpenStack will force OpenStack services into a more narrow scope as an infrastructure service (aka IaaS). This is a good thing in my opinion. We need multiple abstractions when we build effective IT systems.

The idea that we can build a universal single abstraction for all uses is a dangerous distraction; instead, we need to build platform layers collaboratively.

My call for a Kubernetes underlay pivot embraces that collaborative approach. If we can keep these platforms focused on their core value then we can build bridges between what we have and our next innovation. What do you think? Is this a good approach? Contact us if you’d like to work together on making this happen.


We’re excited to see the breadth of platforms enabled by Kargo and how well it handles a wide range of options like integrating Ceph for StatefulSet persistence and Helm for easier application uploads. Those additions have allowed us to fully integrate the OpenStack Helm charts (demo video). READ MORE

_____________

There’s a frustrating cyberattack-driven security awareness cycle in IT Operations. Exploits and vulnerabilities are neither new nor unexpected; however, there is a new element taking shape that should raise additional alarm. Cyberattacks are increasingly profit-generating and automated. READ MORE

_____________

Being a Site Reliability Engineer (SRE) means having to talk about hard problems. Site outages, complex failure scenarios, and other technical emergencies are the things we have to be prepared to deal with every day. When we’re not dealing with problems, we’re discussing them. We regularly perform post-mortems and root cause analyses, and we generally dig into complex technical problems in an unflinching way. READ MORE

_____________

Virtual Panel: OpenStack Summit Boston 2017 Debriefing

_____________

Just a few days before he died at the beginning of the 1990s, a wise man taught us that “the show must go on.” Freddie Mercury’s parting words have long provided the guiding light for many, if not all, ops teams. In their eyes, the production environment should be exposed to minimum risk, even at the expense of new features and problem resolution.

About 10 years ago, Google decided to change its approach to production management. It took the company only a few years to realize that while R&D focused on creating new features and pushing them to production, the Operations group was trying to keep production as stable as possible—the two teams were pulling in opposite directions. This tension arose due to the groups’ different backgrounds, skill sets, incentives and metrics by which they were measured. READ MORE

_____________
