The CoreOS Update Philosophy

CoreOS automates software updates to improve the security and reliability of machines and containers running in large-scale clusters. Operating system updates and security patches are regularly pushed to CoreOS Container Linux machines without requiring administrator intervention. When applications are distributed across a cluster, these automatic updates dramatically improve security without causing service downtime.
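On Container Linux, the update channel and reboot behavior are controlled by a small configuration file. A minimal sketch of `/etc/coreos/update.conf`, assuming the locksmith `etcd-lock` strategy so that clustered machines coordinate and take turns rebooting:

```ini
# /etc/coreos/update.conf
# Follow the stable release channel; "beta" and "alpha" are the other channels.
GROUP=stable
# Locksmith reboot strategy: "etcd-lock" takes a lock in etcd before rebooting,
# so only a limited number of machines restart at once.
# Alternatives include "reboot" (reboot immediately) and "off".
REBOOT_STRATEGY=etcd-lock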

Updates are key to security

The ability to easily and atomically update software is the most effective way to improve server security.

Existing tools are hard to manage at scale

Running large clusters of servers is too hard. Each step in the lifecycle of a Container Linux machine has been optimized for simplicity, consistency and security.

Separate concerns between the OS and apps

Today’s VM-focused workflow ties the OS directly to the apps on the box. Moving dependencies out of the OS and into a container dramatically reduces complexity.

The isolation of all application code and dependencies in containers means these frequent operating system updates can deliver the latest features and security fixes without risk to the apps running above. This decoupling of applications from the system and library dependency layer is the force driving containers in the enterprise. Container Linux applies the same lesson to the container support layer, the operating system itself, minimizing it and formalizing the semantics of updates.

Redefining the Linux Distro

The traditional Linux distribution is a one-size-fits-all, general purpose tool. It bundles a large amount of unused software, which adds bloat, increases the security threat surface, and expands the testing matrix required to certify a new release. Microservices require fewer dependencies, and using a more minimal operating system enables your apps to reach hyperscale.

Container Linux contracts the boundary of the distribution to include just the essentials: the operating system and basic userland utilities are stripped to their bare minimum and shipped as an integral unit. All other applications and dependencies run inside containers, where they can be consistently managed, updated and distributed. As a user of Container Linux, you have a consistent, secure base to run your applications. CoreOS engineers continuously deliver patches to the OS, keep the container engines up to date and ensure your containers run securely.
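In practice, an application on Container Linux is typically a container supervised by systemd rather than a package installed on the host. A minimal sketch of such a unit file, where the service name and image are illustrative:

```ini
# /etc/systemd/system/myapp.service (hypothetical service name and image)
[Unit]
Description=My containerized app
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container from a previous run, then start the app in the
# foreground so systemd can supervise it.
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp nginx:alpine
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target
```

Because the app and its dependencies live entirely in the image, updating the OS underneath never touches them.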

Containers

Containers are key to the modern datacenter. For developers, it has never been easier to ship new application versions. Containers easily plug into your CI/CD pipeline for automated build, test and deployment with an audit trail.

Hybrid infrastructure requires containers – containers are the consistent, portable object that can be safely transported between environments. The tools to accomplish this are open source and standards driven, giving you a truly vendor-neutral solution.

Agility

The ability to easily and atomically update software is the most effective way to improve security.

Portability

Containers turn apps into integral units that can migrate easily between machines and between providers.

Security

Today’s VM-focused workflow ties the OS directly to the apps on the box. Moving dependencies out of the OS and into a container dramatically reduces complexity.

The Docker and rkt container engines are configured out of the box, ready to run your applications, and both are updated automatically alongside the operating system.

Leading Industry Standards

CoreOS has been leading the industry toward container standards since 2013. CoreOS founded appc, the precursor specification to the newly formed Open Container Initiative (OCI), an industry collaboration working to ensure container portability. CoreOS CTO Brandon Philips leads the technical advisory board of the OCI.

CoreOS provides leadership to the container ecosystem:

The most widely used operating system for deploying containers

The most advanced container registry, Quay, with industry-first security scanning

Containers vs. VMs

The desire for self-service resource management drove the success of virtual machines in the enterprise. In the era of cloud-native applications, deployment of software happens across multiple machines and cloud environments, which is difficult to accomplish with a VM-based workflow. Containers fulfill the self-service need of developers, plus enable better security, deployment and operations capabilities.

From a developer’s perspective, building a container is similar to constructing a deb or rpm package. An immutable deployment artifact is produced from a continuous delivery system. Like a VM, the application code and dependencies are packaged within a container image. Unlike a VM, a container behaves just like a normal Linux process – it starts in milliseconds, outputs logs, and appears in the process tree.
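As an illustration of the deb/rpm analogy, a container image is described by a short build file and produced as an immutable, versioned artifact. A sketch in which the image contents are hypothetical:

```dockerfile
# Hypothetical build file for a small service
FROM alpine:3.6
# Dependencies are baked into the image, not installed on the host OS
RUN apk add --no-cache python3
COPY app.py /app/app.py
# The image is immutable; a new version means a new build and a new tag
CMD ["python3", "/app/app.py"]
```

Built once by the continuous delivery system and tagged with a version, the same artifact then runs unchanged in every environment.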

From an operations perspective, containers allow a machine to be carved up into smaller chunks, leading to increased utilization. Unlike VMs, which are typically sized in gigabytes, a container can request fine-grained amounts of resources. These limits are enforced by the Linux kernel, which eliminates the need for a hypervisor layer. Isolating your applications into smaller, more focused microservices also improves security over VMs, which, when breached, are compromised in their entirety. With containers, the kernel limits an attacker's access to the compromised application alone, instead of a scenario where a Memcache bug leads to unfettered access to your MongoDB database.
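These fine-grained requests are expressed directly in a scheduler's workload definition. For example, a Kubernetes pod can ask for a fraction of a CPU core and a few dozen megabytes of memory, with the kernel's cgroups enforcing the limits; the values below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memcache          # illustrative workload
spec:
  containers:
  - name: memcache
    image: memcached:1.5
    resources:
      requests:           # what the scheduler reserves for the container
        cpu: 100m         # 100 millicores, i.e. 0.1 of a CPU core
        memory: 64Mi
      limits:             # what the kernel enforces via cgroups
        cpu: 250m
        memory: 128Mi
```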

Will VMs completely disappear? No. But, most microservices will migrate to containers due to increased productivity for development and operations teams.

A Smart Datacenter Powered by #GIFEE

The transition from monolithic application deployments to flexible, cloud-native microservices requires more agile and dynamic computing infrastructure to keep pace with your business. Until now, these tools were only available to web giants like Facebook and Google. CoreOS brings this power to all businesses through the philosophy of #GIFEE (Google's Infrastructure For Everyone Else):

100% Open Source Components

CoreOS is committed to open source by leading development of popular tools for containers. The #GIFEE stack is no exception, building on top of CoreOS Container Linux, cluster tools from CoreOS, and Kubernetes.

etcd provides cluster consensus and is the backing datastore of Kubernetes clusters.

The rkt container engine provides a secure, composable way to run containers with Kubernetes.

Now part of the Canal project, flannel provides a software-defined networking overlay for Kubernetes clusters.

Kubernetes is inspired by Google’s internal infrastructure, which has understood the benefits of containers for over a decade. CoreOS contributes heavily to Kubernetes, most notably in scalability, auth/permissions features, and the installation experience.

Speed up with Kubernetes

With Kubernetes, development teams get a consistent set of tools to construct scalable services out of containers. Service discovery, load balancing, and self-healing are provided by the cluster, saving each team from re-inventing their own solution.
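Service discovery and load balancing, for instance, come down to a short declarative object. A minimal sketch of a Kubernetes Service that spreads traffic across every pod carrying a given label; the names and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # traffic is balanced across all pods with this label
  ports:
  - port: 80          # stable, cluster-internal port other services use
    targetPort: 8080  # port the application containers actually listen on
```

The cluster keeps the set of backing pods up to date as they come and go, so no team has to build its own discovery layer.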

With #GIFEE, your infrastructure is highly available, scales easily, and runs in the cloud or on-premises.