
Watch this webinar to learn more about the RackN Kubernetes installation integration using community tools like Kubeadm, demonstrated at this week’s KubeCon event (Slides) in Austin, TX. Co-founders Rob Hirschfeld and Greg Althaus of RackN will discuss this fast and simple approach to operating Kubernetes. Of course, we’ll also demonstrate the technology installing Kubernetes following the immutable infrastructure model, highlighting the automated provisioning technology built on the open source Digital Rebar project.

After this webinar, you’ll be prepared to attempt this install strategy on your own.

The RackN and Digital Rebar team is finalizing plans for next week’s KubeCon + CloudNativeCon in Austin, TX from Dec 6 – 8, 2017. Rob Hirschfeld is hosting 2 sessions and we will have a booth in the sponsor showcase. All the info you need is below, and we look forward to seeing you in Austin.

Operators of Kubernetes, Unite! SIG Cluster Ops was formed nearly two years ago with the goal of being an installer-neutral place for operators to collaborate. Frankly, we’ve had challenges getting critical mass because operators cluster around their installer groups. This session will discuss re-chartering as a Working Group and review the mission of the group. We’ll also review plans for the next 6 months. If you’re hoping Kubernetes can limit the installer explosion, this session is a good one for you, too.

In recent releases, we’ve enabled node admission and configuration APIs that eliminate configuration requirements for Kubernetes workers. This allows cluster operators to add and remove nodes from clusters without a configuration management tool driving the process. This fully automated node management behavior allows physical data centers to be much more cloud-like and lights-out.
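For a flavor of what "no configuration management tool" means, here is a minimal sketch of removing a worker purely through the API, using the official Kubernetes Python client; the node name is hypothetical, and a real deployment would drive these calls from the provisioning layer rather than a hand-run script.

```python
# Minimal sketch: decommissioning a worker via the API alone, using the
# official `kubernetes` Python client. The node name is hypothetical and
# workload draining is elided for brevity.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

NODE = "worker-03"  # hypothetical node being removed

# Cordon: mark the node unschedulable so no new pods land on it.
v1.patch_node(NODE, {"spec": {"unschedulable": True}})

# (Drain running workloads via the eviction API before removal.)

# Delete the Node object; the machine can then be wiped and re-provisioned.
v1.delete_node(NODE, body=client.V1DeleteOptions())
```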

In this session, we’ll run this process as a demo and decompose the various parts that must work together for success. We’ll discuss the specific APIs and how to implement them in a coordinated way that ensures node security and minimizes workload disruption. We’ll also discuss how to improve node security by using trusted platform modules (TPM). By the end of the session, operators will be able to duplicate the steps on their own to learn the process.

While we focus on bare metal infrastructure for this session, the lessons learned are equally usable on cloud infrastructure.

In this week’s L8ist Sh9y podcast, Krishnan Subramanian, Founder and Chief Research Advisor of Rishidot Research, talks about Edge Computing, the Kubernetes Ecosystem and the Composable Enterprise. Key highlights:

“Multi-Cloud is the foundation of Modern Enterprise” – Krishnan

Kubernetes ecosystem and the possibility that Serverless could replace it

IT innovation requires a composable and layered approach; without it, IT will find itself trapped in hard-wired infrastructure, unable to move forward

Krishnan Subramanian (a.k.a. Krish) is a well-known expert in the field of cloud computing. He is the founder and Chief Research Advisor at Rishidot Research, a boutique analyst firm focused on the Modern Enterprise. Its open, data-based research helps enterprise decision makers shape their modernization strategy. His Modern Enterprise model helps enterprises innovate rapidly by making IT a core part of the innovation team. He has been a speaker and panelist at various cloud computing conferences, was an advisor for the Glue conference in 2011 and Cloud Connect Santa Clara in 2012, and has organized industry-leading conferences like Deploycon and Cloud2020. He also advises cloud computing startups. He can be reached on Twitter @krishnan.

The RackN team is proud to say that we left the Orchestration out when we migrated from Digital Rebar v2 to v3. That would mean more if anyone actually agreed on what orchestration means… In our case, I think we can be pretty specific: Digital Rebar v3 does not manage work across multiple nodes. At this point, we’re emphatic about it because cross-machine actions add a lot of complexity and require application awareness that quickly blossoms into operational woe, torture and frustration (aka WTF).

In the latest releases (v3.2+), we’ve delivered an easy-to-understand stage and task running system that is simple to extend, transparent in operation and extremely fast. There’s no special language (DSL) to learn or database to master. And if you need those things, then we encourage you to use the excellent options from Chef, Puppet, SaltStack, Ansible and others. That’s because our primary design focus is planning work over multiple boots and operating system environments, not between machines. Digital Rebar shines when you need 3+ reboots to automatically scrub, burn-in, inventory, install and then post-configure a machine.
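To illustrate (this is not Digital Rebar's actual stage syntax), the kind of multi-boot plan we optimize for is a sequence of boot-environment/task pairs, where each environment change implies a reboot:

```python
# Illustrative only, not real Digital Rebar syntax: a multi-boot
# provisioning plan as (boot environment, task) pairs. Every change of
# environment implies a reboot, which is exactly the coordination
# problem the stage workflow is designed around.
PIPELINE = [
    ("sledgehammer", "scrub-disks"),     # in-memory discovery image
    ("sledgehammer", "burn-in"),
    ("sledgehammer", "inventory"),
    ("installer",    "install-os"),      # reboot into the OS installer
    ("local",        "post-configure"),  # reboot into the installed OS
]

for env, task in PIPELINE:
    print(f"boot into {env!r} and run {task!r}")
```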

But we may have crossed an orchestration line with our new cluster token capability.

Starting in the v3.4 release, automation authors will be able to use a shared profile to coordinate work between multiple machines. This is not a Digital Rebar feature per se – it’s a data pattern that leverages Digital Rebar locking, profiles and parameters to share information between machines. This allows scripts to elect leaders, create authoritative information (like tokens) and synchronize actions. The basic mechanism is simple: we create a shared machine profile that includes a token that allows editing the profile. Normally, machines can only edit themselves so we have to explicitly enable editing profiles with a special use token. With this capability, all the machines assigned to the profile can update the profile (and only that profile). The profile becomes an atomic, secure shared configuration space.
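Concretely, a minimal sketch of that data pattern might look like the following; the endpoint URL, profile name, and parameter keys here are illustrative assumptions, not the literal v3.4 schema.

```python
# Sketch of the shared-profile pattern against a Digital Rebar-style
# REST API. All names (endpoint, profile, parameter keys) are
# illustrative assumptions.
import requests

DRP = "https://drp.example.local:8092/api/v3"          # hypothetical endpoint
HDRS = {"Authorization": "Bearer PROFILE-EDIT-TOKEN"}  # the special-use token
CLUSTER = "k8s-cluster-01"                             # hypothetical shared profile

def get_params():
    """Read the shared profile: the atomic, secure configuration space."""
    r = requests.get(f"{DRP}/profiles/{CLUSTER}", headers=HDRS)
    r.raise_for_status()
    return r.json().get("Params", {})

def set_param(key, value):
    """Publish a value (leader IP, join token) for the other machines;
    Digital Rebar's locking keeps concurrent writers consistent."""
    r = requests.get(f"{DRP}/profiles/{CLUSTER}", headers=HDRS)
    r.raise_for_status()
    profile = r.json()
    profile.setdefault("Params", {})[key] = value
    requests.put(f"{DRP}/profiles/{CLUSTER}", headers=HDRS,
                 json=profile).raise_for_status()
```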

For example, when building a Kubernetes cluster using Kubeadm, the installation script needs to take different actions depending on which node is first. The first node needs to initialize the cluster master, generate a token and share its IP address. The subsequent nodes must wait until the master is initialized and then join using the token. The installation pattern is basically a first-in leader election while all others wait for the leader. There’s no need for more complex sequencing because the real install “orchestration” is done after the join when Kubernetes starts to configure the nodes.
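From a node's point of view, that first-in election is just a conditional. This sketch continues the hypothetical get_params()/set_param() helpers above; MY_IP and the parameter names are again assumptions for illustration.

```python
# First-in leader election for a kubeadm install, continuing the
# get_params()/set_param() sketch above.
import re
import subprocess
import time

MY_IP = "10.0.0.11"  # this machine's address (hypothetical)

params = get_params()
if "cluster/leader" not in params:
    # First in: claim leadership (Digital Rebar's profile locking breaks
    # ties; elided here), initialize the master, publish the join token.
    set_param("cluster/leader", MY_IP)
    out = subprocess.run(["kubeadm", "init"],
                         capture_output=True, text=True, check=True).stdout
    token = re.search(r"--token\s+(\S+)", out).group(1)
    set_param("cluster/join-token", token)
else:
    # Everyone else: wait until the master publishes the token, then join.
    while "cluster/join-token" not in params:
        time.sleep(10)
        params = get_params()
    subprocess.run(["kubeadm", "join",
                    f"{params['cluster/leader']}:6443",
                    "--token", params["cluster/join-token"],
                    "--discovery-token-unsafe-skip-ca-verification"],
                   check=True)
```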

Our experience is that recent cloud native systems are all capable of this type of shotgun start where all the nodes start in parallel with the minimal bootstrap coordination that Digital Rebar can provide.

Individually, the incremental features needed to enable cluster building were small additions to Digital Rebar. Together, they provide a simple yet powerful management underlay. At RackN, we believe that simple beats complex every day, and we’re fighting hard to make sure operations stays that way.

OpenStack is a real platform doing real work for real users. So why does OpenStack have a reputation for not working? It falls into the lack-of-core-focus paradox: being too much to too many undermines your ability to do something well. In this case, we keep conflating the community and the code.

I have a long history with the project but have been pretty much outside of it (yay, Kubernetes!) for the last 18 months. That perspective helps me feel like I’m getting closer to the answer after spending a few days with the community at the latest OpenStack Summit in Sydney, Australia. While I love to think about the why, what the leaders are doing about it is just as interesting.

Fundamentally, OpenStack’s problem is that infrastructure automation is too hard and big to be solved within a single effort.

It’s so big that any workable solution will fail for a sizable number of hopeful operators. That does not keep people from the false aspiration that OpenStack code will perfectly fit their needs (especially if they are unwilling to trim their requirements).

But the problem is not inflated expectations for OpenStack VM IaaS code; it’s that we keep feeding them. I have been a long-time champion of a small core with a clear ecosystem boundary. When OpenStack code claims support for other use cases, it invites disappointment and frustration.

So why is the OpenStack Foundation moving to expand its scope as an Open Infrastructure community with additional focus areas? It’s simple: the community is asking it to.

Within the vast space of infrastructure automation, there are clusters of aligned interest. These clusters are sufficiently narrow that they can collaborate on shared technologies and practices. They also have a partial (Venn) overlap with adjacencies where OpenStack is already present. There is a strong economic and social drive for members of these overlapped communities to bridge together instead of creating new, disparate groups. Having the OpenStack Foundation organize these efforts is a natural and expected function.

The danger of this expansion comes from carrying the expectation that the technology (code) will also be carried into the adjacencies. That’s exactly my rationale for why the original VM IaaS core needs to be smaller. The wealth of non-core projects crosses clusters of interests. Instead of allowing these clusters to optimize around their shared interests, users get the impression that they must broadly adopt unneeded or poorly fit components. The idea of “competitive” projects should be reframed because they may overlap in function but not in use-case fit.

It’s long past time to give up expectations that OpenStack is a “one-stop-shop” of infrastructure automation. In my opinion, it undermines the community mission by excluding adjacencies.

I believe that OpenStack must work to embrace its role as an open infrastructure community; however, it must also do the hard work to create welcoming space for adjacencies. These adjacencies will compete with existing projects currently under the OpenStack code tent. The community needs to embrace that the hard work done so far may simply be sunk cost for new use cases.

It’s the OpenStack community and the experience, not the code, that creates long term value.

Welcome to the weekly post of the RackN blog recap of all things Digital Rebar, RackN, SRE, and DevOps. If you have any ideas for this recap or would like to include content, please contact us at info@rackn.com or tweet Rob (@zehicle) or RackN (@rackngo).

Items of the Week

Digital Rebar

Digital Rebar Releases V3.2 – Stage Workflow

In v3.2, Digital Rebar continues to refine the groundbreaking provisioning workflow introduced in v3.1. Updates to the workflow make it easier to consume by external systems like Terraform. We’ve also improved the consistency and performance of both the content and service.

The release of workflow and the addition of inventory mean that Digital Rebar v3 effectively replaces all key functions of v2 with a significantly smaller footprint, minimal learning curve and improved performance. One major v2 feature, multi-node coordination, is not on any roadmap for v3 because we believe those use cases are well served by upstack integrations like Terraform and Ansible. Full Post

RackN

Joining this week’s L8ist Sh9y Podcast is Zach Smith, CEO of Packet and long-time champion of bare metal hardware. Rob Hirschfeld and Zach discuss the trends in bare metal, the impact of AWS changing the way developers view infrastructure, and issues between networking and server groups in IT organizations. (Blog with Topics and Times)

Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events, please email info@rackn.com

Kubespray (formerly Kargo) is a project under the Kubernetes community umbrella. From the technical side, it is a set of tools that make it easy to deploy a production-ready Kubernetes cluster.

Kubespray supports multiple Linux distributions for hosting Kubernetes clusters (including Ubuntu, Debian, CentOS/RHEL and Container Linux by CoreOS) and multiple cloud providers as an underlay for cluster deployment (AWS, DigitalOcean, GCE, Azure and OpenStack), together with the ability to use bare metal installations. It can consume Docker or rkt as the container runtime for containerized workloads, along with a wide variety of networking plugins (Flannel, Weave, Calico and Canal), or built-in cloud provider networking instead.
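For reference, a Kubespray deployment ultimately comes down to a single Ansible invocation; here is a hypothetical sketch of driving it from Python, where the flags mirror Kubespray's documented ansible-playbook usage but the inventory path is illustrative.

```python
# Hypothetical sketch of driving a Kubespray deployment from Python.
# The inventory path is illustrative; cluster.yml is Kubespray's main
# playbook.
import subprocess

subprocess.run(
    ["ansible-playbook",
     "-i", "inventory/mycluster/hosts.ini",  # nodes grouped by role
     "--become",                             # privilege escalation for node setup
     "cluster.yml"],                         # Kubespray's main playbook
    check=True,
)
```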

In this talk, we will describe the options for using Kubespray to build Kubernetes environments on OpenStack and how you can benefit from them.

What can I expect to learn?

Active Kubernetes community members Ihor Dvoretskyi and Rob Hirschfeld will highlight the benefits of running Kubernetes on top of OpenStack, and will describe how Kubespray can simplify cluster building and management for these use cases.

Ihor is a Developer Advocate at the Cloud Native Computing Foundation (CNCF), focused on upstream Kubernetes-related efforts. He acts as a product manager in the Kubernetes community, leading the Product Management Special Interest Group with the goal of growing Kubernetes as the #1 open source container orchestration platform.

Rob Hirschfeld

Rob Hirschfeld has been involved in OpenStack since the earliest days, with a focus on ops and building the infrastructure that powers cloud and storage. He’s also co-chair of the Kubernetes Cluster Ops SIG and a four-term OpenStack board member.