
CoreOS is devoted to making Kubernetes continuously better and easier, from installation to the long-term lifecycle management required for critical infrastructure. Working closely with other CNCF Kubernetes maintainers and the wider community, CoreOS is active throughout the Kubernetes code base. We've helped make the node container runtime interface modular, and we endlessly refine the central etcd cluster configuration store. Today, we're previewing work that brings Kubernetes to a classic maturity watermark for systems software: "self-hosting".

What’s self-hosting?

This term is applied to several categories of systems software, especially compilers and operating systems. “Self-hosted” refers to the ability of a system to be expressed in terms of itself. For a compiler, this means building the compiler from source using that same compiler. For an operating system, self-hosting implies that the install tools and upgrade facilities use the operating system running on that machine to implement their functions – the system does not require a supervisory host to bootstrap it into life. The opposite, externally hosted model is seen in historical systems, in many systems early in their development, and in minimalist embedded systems to this day. Self-hosting solves chicken-and-egg problems in software.

Self-hosted Kubernetes

In this presentation, CoreOS Kubernetes team lead Aaron Levy outlines what we’ve implemented to make Kubernetes self-hosted, why and how we did it, and demonstrates with detailed examples how self-hosted Kubernetes makes life easier for container cluster admins using Kubernetes orchestration.

Self-hosted means updating Kubernetes is easy

For Kubernetes, self-hosted means simplifying the process of installing and upgrading Kubernetes clusters by making the entire Kubernetes system run on the cluster as Kubernetes objects. The API server and other control plane components are scheduled, executed, and managed as Kubernetes API objects, like any other cluster work – Kubernetes running on Kubernetes.
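As an illustration of what this looks like in practice (a sketch only – the namespace and pod names below are assumptions, and the exact objects vary by installation), the control plane of a self-hosted cluster is visible through the same kubectl commands used for any other workload:

```shell
# List the control plane components running as ordinary pods.
# In a self-hosted cluster, the API server, scheduler, and
# controller manager appear here like any other cluster work.
kubectl get pods --namespace=kube-system

# Inspect a control plane pod the same way you would inspect an
# application pod. (The pod name is hypothetical; substitute one
# from the listing above.)
kubectl describe pod kube-apiserver-abc123 --namespace=kube-system
```

Because these are regular Kubernetes API objects, everything else in the kubectl toolbox – logs, labels, events – applies to the control plane as well.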

Today, installing and managing upgrade lifecycles for Kubernetes clusters requires a different set of skills and tools than actually operating the cluster and running business applications on it. Does it also require different staff? Ideally, mastering kubectl and other tools to operate Kubernetes should translate into the know-how to install Kubernetes in the first place, and to keep it running over time.

Self-hosted means high availability is easy

Once Kubernetes is executed as a set of Kubernetes objects and managed by Kubernetes tools, intelligently coordinating upgrades and the complete cluster orchestrator lifecycle is identical to managing any other object in the cluster. Scaling control plane components for high availability becomes just another kubectl scale … command, and upgrades of the cluster manager can be kicked off – and executed as predictably and reliably – as any other Kubernetes application rolling update.
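For example – a sketch under assumptions, since the deployment names and image tag here are illustrative rather than the actual objects a self-hosted installer creates – scaling and upgrading the control plane could look like:

```shell
# Scale the API server for high availability, exactly like any
# other workload (deployment name is hypothetical):
kubectl scale deployment kube-apiserver --replicas=3 \
    --namespace=kube-system

# Upgrade the scheduler with an ordinary rolling update
# (image and tag shown are illustrative):
kubectl set image deployment/kube-scheduler \
    kube-scheduler=quay.io/coreos/hyperkube:v1.5.1 \
    --namespace=kube-system

# Watch the rollout proceed like any application update:
kubectl rollout status deployment/kube-scheduler \
    --namespace=kube-system
```

The point is that no separate upgrade tool or out-of-band procedure is involved: the same rolling-update machinery that protects business applications protects the cluster manager itself.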

We think that the self-hosted model provides a number of advantages over the alternatives, and is an applicable deployment model for virtually all Kubernetes deployments. Across many users and Kubernetes Special Interest Groups (SIGs), CoreOS has seen overwhelming demand for three required management features: