Blog

Prometheus is one of the standard-bearing open-source solutions for monitoring and observability. From its humble origins at SoundCloud in 2012, Prometheus quickly garnered widespread adoption and later became one of the first CNCF projects and just the second to graduate (after Kubernetes). It’s used in production by many forward-thinking companies, including heavyweights like DigitalOcean, Fastly, and Weaveworks, and has its own dedicated yearly conference, PromCon.

Prometheus: powerful but intentionally limited

Prometheus has succeeded in part because the core Prometheus server and its various complements, such as Alertmanager, Grafana, and the exporter ecosystem, form a compelling end-to-end solution to a crucial but difficult problem. Prometheus does not, however, provide some of the capabilities that you’d expect from a full-fledged “as-a-Service” platform, such as multi-tenancy, authentication and authorization, and built-in long-term storage.

Cortex, which joined the CNCF in September as a sandbox project, is an open-source Prometheus-as-a-Service platform that seeks to fill those gaps and to thereby provide a complete, secure, multi-tenant Prometheus experience. I’ll say a lot about Cortex further down; first, let’s take a brief excursion into the more familiar world of Prometheus to get our bearings.

Why Prometheus?

As a CNCF developer advocate, I’ve had the opportunity to become closely acquainted both with the Prometheus community and with Prometheus as a tool (mostly working on docs and the Prometheus Playground). Its great success is really no surprise to me for a variety of reasons:

Prometheus offers a simple and easily adoptable metrics exposition format that makes it easy to write your own metrics exporters. This format is even being turned into an open standard via the OpenMetrics project (which also recently joined the CNCF sandbox).
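For a concrete (if invented) example, a counter exposed in the text exposition format looks like this — the metric and label names are illustrative:

```
# HELP http_requests_total Total number of HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="get",code="500"} 3
```

Any HTTP endpoint that serves lines in this shape can be scraped by Prometheus, which is what makes writing exporters so approachable.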

Prometheus offers a simple but powerful label-based querying language, PromQL, for working with time series data. I find PromQL to be highly intuitive.
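To illustrate (with hypothetical metric and label names), here is a PromQL query computing the per-instance rate of server errors over the last five minutes:

```promql
sum by (instance) (
  rate(http_requests_total{job="api-server", code="500"}[5m])
)
```

The label matchers, range selector, and aggregation compose naturally, which is a big part of what makes the language feel intuitive.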

Why Prometheus-as-a-Service?

Early on, Prometheus’ core engineers made the wise decision to keep Prometheus lean and composable. From the get-go, Prometheus was designed to do a small set of things very well and to work seamlessly in conjunction with other, optional components (rather than overburdening Prometheus with an ever-growing array of hard-coded features and integrations). Here are some things that Prometheus was not meant to provide:

Long-term storage — Individual Prometheus instances provide durable storage of time series data, but they do not act as a distributed data storage system with features like cross-node replication and automatic repair. This means that durability guarantees are restricted to those of a single machine. Fortunately, Prometheus offers a remote write API that can be used to pipe time series data to other systems.

A global view of data — As described in the bullet point above, Prometheus instances act as isolated data storage units. Prometheus instances can be federated, but that adds a lot of complexity to a Prometheus setup, and, again, Prometheus simply wasn’t built as a distributed database. This means that there’s no simple path to achieving a single, consistent, “global” view of your time series data.

Multi-tenancy — Prometheus by itself has no built-in concept of a tenant. This means that it can’t provide any sort of fine-grained control over things like tenant-specific data access and resource usage quotas.

Why Cortex?

As a Prometheus-as-a-Service platform, Cortex fills in all of these crucial gaps with aplomb and thus provides a complete out-of-the-box solution for even the most demanding monitoring and observability use cases.

It offers a global view of Prometheus time series data that includes data in long-term storage, greatly expanding the usefulness of PromQL for analytical purposes.

It has multi-tenancy built into its very core. All Prometheus metrics that pass through Cortex are associated with a specific tenant.

The architecture of Cortex

Cortex has a fundamentally service-based design, with its essential functions split up into single-purpose components that can be independently scaled:

Distributor — Handles time series data written to Cortex by Prometheus instances using Prometheus’ remote write API. Incoming data is automatically replicated and sharded, and sent to multiple Cortex ingesters in parallel.

Ingester — Receives time series data from distributor nodes and then writes that data to long-term storage backends, compressing data into Prometheus chunks for efficiency.

Querier — Handles PromQL queries from clients (including Grafana dashboards), abstracting over both ephemeral time series data and samples in long-term storage.
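In the real system, the distributor shards series across ingesters using a consistent hash ring; purely as a simplified sketch of that idea (this is not Cortex’s actual algorithm, and `shard_series` and the ingester names are invented for illustration), the replicate-and-shard step might look like:

```python
import hashlib
from typing import Dict, List

def shard_series(series_labels: Dict[str, str],
                 ingesters: List[str],
                 replication_factor: int = 3) -> List[str]:
    """Pick `replication_factor` ingesters for one series by hashing its labels."""
    # Hash the full label set so every sample of a series lands on the same replicas.
    key = ",".join(f"{k}={v}" for k, v in sorted(series_labels.items()))
    start = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(ingesters)
    # Walk a conceptual "ring", taking consecutive ingesters as replicas.
    return [ingesters[(start + i) % len(ingesters)] for i in range(replication_factor)]

ingesters = ["ingester-1", "ingester-2", "ingester-3", "ingester-4"]
labels = {"__name__": "http_requests_total", "job": "api"}
print(shard_series(labels, ingesters))  # three distinct replicas, stable across calls
```

In Cortex, a write is acknowledged back to Prometheus once a quorum of the chosen ingesters has accepted it.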

Each of these components can be managed independently, which is key to Cortex’s scalability and operations story. You can see a basic diagram of Cortex and the systems it interacts with below:

As the diagram shows, Cortex “completes” the Prometheus Monitoring System. To adapt it to existing Prometheus installations, you just need to re-configure your Prometheus instances to remote write to your Cortex cluster and Cortex handles the rest.
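Assuming a Cortex cluster reachable at a placeholder URL, the Prometheus side of that re-configuration is just a remote_write section:

```yaml
# prometheus.yml (fragment) -- the endpoint URL is a placeholder
remote_write:
  - url: http://cortex.example.com/api/prom/push
```

In multi-tenant setups, the tenant ID is conventionally carried in an X-Scope-OrgID header on these requests, typically injected by an authenticating proxy sitting in front of Cortex.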

Multi-tenancy

Single-tenant systems tend to be fine for small use cases and non-production environments, but for large organizations with a plethora of teams, use cases, and environments, those systems become untenable (no pun intended). To meet the exacting requirements of such large organizations, Cortex provides multi-tenancy not as an add-on or a plugin but rather as a first-class capability.

Multi-tenancy is woven into the very fabric of Cortex. All time series data that arrives in Cortex from Prometheus instances is marked as belonging to a specific tenant in the request metadata. From there, that data can only be queried by the same tenant. Alerting is multi-tenant as well, with each tenant able to configure its own alerts using Alertmanager configuration.

In essence, each tenant has its own “view” of the system, its own Prometheus-centric world at its disposal. And even if you start out using Cortex in a single-tenant fashion, you can expand to an indefinitely large pool of tenants at any time.

Use cases

Several years into Cortex’s development, its users have tended to cluster into two broad categories:

Service providers building hosted, managed platforms offering a monitoring and observability component. If you were building a Platform-as-a-Service offering like Heroku or Google App Engine, for example, Cortex would enable you to provide each application running on the platform with the full spectrum of capabilities provided by Prometheus and to treat each application (or perhaps each account or customer) as a separate tenant of the system. Weave Cloud and Grafana Labs are examples of comprehensive cloud platforms that use Cortex to enable customers to use Prometheus to the fullest.

Enterprises with many internal customers running their own apps, services, and “stacks.” EA and StorageOS are examples of large enterprises that have benefited from Cortex.

Cortex, the Prometheus ecosystem, and the CNCF

Cortex has some highly compelling technological bona fides, but given the current industry Zeitgeist I think it’s important to point out its open source bona fides as well:

Cortex is already running in production powering Weave Cloud and Grafana Cloud, two cloud offerings (and core contributors) whose success is crucially dependent on the future trajectory of Cortex.

With the addition of Cortex to the CNCF sandbox, there are now three Prometheus-related projects under the CNCF umbrella (including Prometheus itself and OpenMetrics). We know that monitoring and observability are essential components of the cloud native paradigm, and we’re happy to see continued convergence around some of the core primitives that have organically emerged from the Prometheus community. The Cortex project is energetically carrying this work forward, and I’m excited to see the Prometheus-as-a-Service offshoot of the Prometheus ecosystem take shape.

With KubeCon Seattle now behind us, here’s a snapshot of all the cloud native goodness at our most jam-packed show to date. The sold-out KubeCon + CloudNativeCon North America 2018 had the largest attendance and waiting list of any past CNCF event with more than 8,000 contributors, end users, vendors and developers from around the world gathering for over three days in Seattle, Washington to further the education and adoption of cloud native computing, and share insights around this fast-growing ecosystem.

With 8,000 attendees in-person and another 2,000 on the waitlist experiencing major FOMO as they watched the live stream keynotes and read their Twitter feeds, KubeCon Seattle attendance was an 83% increase over last year’s KubeCon event in Austin. And while the attendee numbers grew, the great “developer conference” experience remained the same!

During the three days of keynotes, women were front and center as we heard from KubeCon Co-Chair Liz Rice, who gave a CNCF community update, alongside a number of our project maintainers — including a Helm update from Michelle Noorali of Microsoft; an Envoy update from Matt Klein of Lyft; and an overview of Kubernetes growth from Aparna Sinha of Google.

With 40% of all keynotes coming from women, the ladies of cloud native were running the stage! KubeCon Co-Chair Janet Kuo of Google explained why Kubernetes being “boring” is a good thing, and Liz Rice put her Aqua Security hat on for another keynote emphasizing the importance of security, saying, “CNCF is not here to throw glitzy events, but to help us coordinate as a community and ensure we have proper governance in place and make it harder to hand privileges to some random dude, and that is important as more and more companies rely on open source technologies. Good governance is how as a community we can save ourselves from a security attack.” To cap off all the keynotes, Kelsey Hightower gave a shout-out in his serverless keynote to the amazing real women of Hidden Figures, his mom, and the queen of Motown, Diana Ross.

the @CloudNativeFdn is crushing it with diverse representation on stage today #kubecon. At a glance, it looks like cloud native is being driven by women, with a few token men to keep things honest. so many great women leaders in the K8s ecosystem.

Amazing women in technology leadership roles were also seen in important “How do you make this work in the enterprise” keynotes and sessions. Airbnb software engineer Melanie Cebula identified key problems that make out-of-the-box Kubernetes less friendly to developers. She also laid out 10 strategies for addressing these issues based on Airbnb’s experience empowering one thousand engineers to develop hundreds of Kubernetes services at scale.

Uber’s Celina Ward and Matt Schallert shared their experience creating an operator for a unique stateful workload. They discussed the major shift in thought and provided the audience with a framework for expressing their stateful workloads using Kubernetes primitives, and advice for navigating the difficult process of codifying innovative abstract ideas without over-engineering solutions.

While having so many women speakers at KubeCon Seattle was a giant step forward, there were a number of other activities that brought together the diverse cloud native community, including speed networking and mentoring, a diversity lunch, sessions on building a community through Meetups, and KubeCon attendee scholarships.

CNCF’s diversity program offered scholarships to 147 recipients, from traditionally underrepresented and/or marginalized groups, to attend KubeCon Seattle! The $300,000 investment for Seattle — the most ever invested by a conference for diversity — was funded in large part by CNCF, along with contributions from scholarship sponsors Aspen Mesh, MongoDB, Twistlock, Two Sigma and VMware. Including Seattle, CNCF has offered more than 485 diversity scholarships to attend KubeCons in the past 2 years.

CNCF also collaborates with the Kubernetes mentoring program to offer networking opportunities for mentees at KubeCons. 66 mentors and 180+ mentees participated in this program during KubeCon Seattle.

For the third year in a row, the CNCF Community Awards, sponsored by VMware, highlighted the most active ambassador and top contributor across all CNCF projects.

Top Cloud Native Committer – an individual with incredible technical skills and notable technical achievements in one or multiple CNCF projects. The 2018 recipient was Jordan Liggitt.

Top Cloud Native Ambassador – an individual with incredible community-oriented skills, focused on spreading the word and sharing knowledge with the entire cloud native community or within a specific project. The 2018 recipient was Michael Hausenblas.

Jordan is one of the hardest working engineers I’ve ever met — and takes the time to mentor and support everyone he works with. Extremely well deserved. #KubeCon https://t.co/hWyUHKpdfw

27 co-located events occurred on Day 0 (December 10th) of the conference. There were a number of great technical and community-building sessions, including Linkerd in Production 101, the Kubernetes Contributor Summit, and the first-ever EnvoyCon!

Taking a load off for the first time since 8AM. Phew! What a day! Thank you so, so much to everyone who helped make #envoycon a spectacular success. Every talk was amazing! It blows my mind to see such an engaged community come together like this. Can’t wait for next year! ❤️

In 2016, Deis (now part of Microsoft) platform architect Matt Butcher was looking for a way to explain Kubernetes to technical and non-technical people alike. Inspired by his daughter’s prolific stuffed animal collection, he came up with the idea of “The Illustrated Children’s Guide to Kubernetes.” Thus Phippy, the yellow giraffe and PHP application, along with her friends, were born.

On the keynote stage during Day 1 of the conference, Matt and co-author Karen Chu announced Microsoft’s donation of Phippy to CNCF and presented the official sequel to the Children’s Illustrated Guide to Kubernetes in their live reading of “Phippy Goes to the Zoo: A Kubernetes Story”.

As part of Microsoft’s donation of both books and the characters, CNCF has licensed all of this material under the Creative Commons Attribution License (CC-BY), which means that you can remix, transform, and build upon the material for any purpose, even commercially.

It may be hard to believe with all of our expansive growth – CNCF membership increased 110% this year, adding 169 new members – but CNCF is still young in years. We celebrated our third birthday this week with the cloud native community!

I really want to thank all of the speakers at SEA18 #kubecon. The content is getting more technical, more helpful, and you guys were all so professional! THANK YOU for all of the hard work you put into them. I really enjoy watching this community mature. 😊 cc: @CloudNativeFdn

As keynotes for #KubeCon wrap up, major, major kudos to @CloudNativeFdn and organizers for putting together such a diverse, insightful, and inspiring set of speakers. THIS is how it’s done. (seeing a diversity photo being set up now is just icing on cake).

KubeCon + CloudNativeCon China 2019 is scheduled for June 25-26 at the Shanghai Convention & Exhibition Center of International Sourcing in Shanghai, China. CFPs will open later this month and close February 1, 2019.

Today we’re excited to welcome Phippy and the cast of snuggly, cloud native characters into CNCF. As Kubernetes continues to see unprecedented momentum, her story offers developers an easy way to explain their work to parents, friends, and children.

Today, live from the keynote stage at KubeCon + CloudNativeCon North America, Matt and co-author Karen Chu announced Microsoft’s donation and presented the official sequel to the Children’s Illustrated Guide to Kubernetes in their live reading of “Phippy Goes to the Zoo: A Kubernetes Story” – the tale of Phippy and her niece as they take an educational trip to the Kubernetes Zoo.

As part of Microsoft’s donation of both books and the characters Phippy, Goldie, Captain Kube, and Zee, CNCF has licensed all of this material under the Creative Commons Attribution License (CC-BY), which means that you can remix, transform, and build upon the material for any purpose, even commercially. If you use the characters, please include the text “phippy.io” to provide attribution (and online, please include a link to https://phippy.io). The characters were created by Matt Butcher, Karen Chu, and Bailey Beougher. Goldie is based on the Go Gopher, created by Renee French, which is also licensed under CC-BY. Images of the characters are available in the CNCF artwork repo in svg, png, and ai formats and in color, black, and white.

Now that Phippy and her cloud native friends have made CNCF their home, make sure to keep an eye out for the fun adventures the characters will find themselves on as the Kubernetes global community continues to grow!

etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines with best-in-class stability, reliability, scalability, and performance. The project – frequently teamed with applications such as Kubernetes, M3, Vitess, and Doorman – handles leader elections during network partitions and will tolerate machine failure, including the leader.

“etcd acts as a source of truth for systems like Kubernetes,” said Brian Grant, TOC representative and project sponsor, Principal Engineer at Google, and Kubernetes SIG Architecture Co-Chair and Steering Committee member. “As a critical component of every cluster, having a reliable way to automate its configuration and management is essential. etcd offers the necessary coordination mechanisms for cloud native distributed systems, and is cloud native itself.”

All Kubernetes clusters use etcd as their primary data store. As such, it handles storing and replicating data for Kubernetes cluster state and uses the Raft consensus algorithm to recover from hardware failure and network partitions. In addition to Kubernetes, Cloud Foundry also uses etcd as its distributed key-value store. This means etcd is used in production by companies such as Ancestry, ING, Pearson, Pinterest, The New York Times, Nordstrom, and many more.

“Alibaba uses etcd for several critical infrastructure systems, given its superior capabilities in providing high availability and data reliability,” said Xiang Li, senior staff engineer, Alibaba. “As a maintainer of etcd we see the next phase for etcd to focus on usability and performance. Alibaba looks forward to continuing co-leading the development of etcd and making etcd easier to use and more performant.”

“AWS is proud to have dedicated maintainers of etcd on our team to help ensure the bright future ahead for etcd. We look forward to continuing work alongside the community to continue the project’s stability,” said Deepak Singh, Director of Container Services at AWS.

“The Certificate Transparency team at Google works on both implementations and standards to fundamentally improve the security of internet encryption. The Open Source Trillian project is used by many organizations as part of that effort to detect the mis-issuance of trusted TLS certificates on the open internet”, says Al Cutter, the team lead, “And, etcd continues to play a role in the project by safely storing API quota data to protect Trillian instances from abusive requests, and reliably coordinating critical operations.”

Written in Go, etcd has unrivaled cross-platform support, small binaries, and a thriving contributor community. It also integrates with existing cloud native tooling like Prometheus monitoring, which can track important metrics like latency from the etcd leader and provide alerting and dashboards.

“Kubernetes and many other projects like Cloud Foundry depend on etcd for reliable data storage. We’re excited to have etcd join CNCF as an incubation project and look forward to cultivating its community by improving its technical documentation, governance and more,” said Chris Aniszczyk, COO of CNCF. “etcd is a fantastic addition to our community of projects.”

“When we introduced etcd during the early days of CoreOS, we wanted it to be this ubiquitously available component of a larger system. Part of the way you get ubiquity is to get everyone using it, and etcd hit critical mass with Kubernetes and has extended to many other projects and users since. As etcd goes into the CNCF, maintainers from Amazon, Alibaba, Google Cloud, and Red Hat all have nurtured the project as its user base has grown. In fact, etcd is deployed in every major cloud provider now and is a part of products put forward by all these companies and across the cloud native ecosystem,” said Brandon Philips, CTO of CoreOS at Red Hat. “Having a neutral third party stewarding the copyrights, DNS and other project infrastructure is the reasonable next step for the etcd project and users.”

Other common use cases of etcd include storing important application configuration like database connection details or feature flags as key value pairs. These values can be watched, allowing the application to reconfigure itself when changed. Advanced uses take advantage of the consistency guarantees to implement database leader elections or do distributed locking across a cluster of workers.
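etcd provides these patterns through its watch, lease, and transaction APIs (the Go client even ships a concurrency package with ready-made election helpers). As a toy, single-process Python sketch of the lease-based election idea only — this is not etcd’s API, and `MiniKV` is an invented stand-in — the core compare-and-swap-with-TTL logic looks like:

```python
import threading
import time

class MiniKV:
    """Toy in-memory stand-in for an etcd-like store: compare-and-swap plus TTL leases."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}  # key -> (value, lease expiry time)

    def put_if_absent(self, key, value, ttl):
        """Atomically claim `key` only if it is missing or its lease has expired."""
        with self._lock:
            now = time.monotonic()
            current = self._data.get(key)
            if current is None or current[1] <= now:
                self._data[key] = (value, now + ttl)
                return True
            return False

    def get(self, key):
        entry = self._data.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]
        return None

kv = MiniKV()
assert kv.put_if_absent("service/leader", "node-a", ttl=0.2)      # node-a wins the election
assert not kv.put_if_absent("service/leader", "node-b", ttl=0.2)  # node-b loses while the lease holds
time.sleep(0.25)  # node-a fails to renew its lease...
assert kv.put_if_absent("service/leader", "node-b", ttl=0.2)      # ...so node-b takes over
```

In real etcd, the same shape is achieved with a lease attached to the key and a transaction that writes only if the key does not yet exist; followers watch the key to learn when leadership changes.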

Main etcd Features:

Easily manages cluster coordination and state across any distributed system

Written in Go and uses the Raft consensus algorithm to manage a highly available replicated log

Handles leader elections during network partitions and will tolerate machine failure, including the leader

Supports dynamic cluster membership reconfiguration

Offers stable read/write under high load

Includes a multi-version concurrency control data model

Provides reliable key watching that never silently drops events

Offers lease primitives for expiring keys

Notable Milestones:

469 contributors

21,627 GitHub stars

157 releases

14,825 commits

4,310 forks

9 maintainers representing 8 companies

As a CNCF hosted project, joining incubated technologies like OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Envoy, Jaeger, Notary, TUF, Vitess, NATS, Helm, Rook and Harbor, etcd is part of a neutral foundation aligned with its technical interests, as well as the larger Linux Foundation, which provides governance, marketing support, and community outreach.

Every CNCF project has an associated maturity level: sandbox, incubating, or graduated project. For more information on what qualifies a technology for each level, please visit the CNCF Graduation Criteria v.1.1.

Now that we’ve finally caught our breath after a fantastic two days at the KubeCon + CloudNativeCon in Shanghai, let’s dive into some of the key highlights and news. The best part is we get to see so many of you so soon again at KubeCon + CloudNativeCon Seattle in December!

The sold-out event with more than 2,500 attendees (technologists, maintainers and end users of CNCF’s hosted projects) was full of great keynotes, presentations, discussions and deep dives on projects including Rook, Jaeger, Kubernetes, gRPC, containerd – and many more! Attendees had the opportunity to hear a slew of compelling talks from CNCF project maintainers, community members and end users including eBay, JD.com and Alibaba.

After hosting successful events in Europe and North America the past few years, it’s no wonder China was the next stop on the tour. Asia has seen a spike in cloud native adoption to the tune of 135% since March 2018. You can find more information about the tremendous growth of cloud usage in China from CNCF’s Survey.

The conference kicked off by welcoming 53 new global members and end users to the Foundation, including several from China such as R&D China Information Technology, Shanghai Qiniu Information Technologies, Beijing Yan Rong Technology and Yuan Ding Technology. The CNCF has 40 members in China, which represents a little more than 10% of the CNCF’s total membership.

During the conference there were many great sessions on AI, as well as a Kubernetes AI panel discussion that included product managers, data scientists, engineers, and architects from Google, CaiCloud, eBay, and JD.com.

In further exciting China news, Harbor, the first project contributed by VMware to CNCF, successfully moved from sandbox into incubation. “Harbor is not only the first project donated by VMware to CNCF, but it is also the first Chinese program developed in the Chinese open source community donated to CNCF.”

Missed out on KubeCon + CloudNativeCon Shanghai? Don’t worry as you have more chances to attend. KubeCon + CloudNativeCon North America 2018 is taking place in Seattle, WA from December 10-13. The Conference is currently sold out, but if you’d like to be added to the waitlist, fill out this form. You will be notified as new spots become available.

Also, massive kudos to the translators at #KubeCon + #CloudNativeCon who have been ON POINT. The translator for the Service Mesh panel was capturing and relaying a boatload of technical details in a staggeringly fast amount of time.

What’s in a function?

Creating serverless applications is a multi-step process. One of the critical steps in this process is packaging the serverless functions you want to deploy into your FaaS (Function as a Service) platform of choice.

Before a function can be deployed it needs two types of dependencies: direct function dependencies and runtime dependencies. Let’s examine these two types.

Direct function dependencies – These are objects that are part of the function process itself and include:

The function source code or binary

Third party binaries and libraries the function uses

Application data files that the function directly requires

Runtime function dependencies – This is data related to the runtime aspects of your function. It is not directly required by the function but configures or references the external environment in which the function will run. For example:

Event message structure and routing setup

Environment variables

Runtime binaries such as OS-level libraries

External services such as databases, etc.

For a function service to run, all dependencies, direct and runtime, need to be packaged and uploaded to the serverless platform. Today, however, there is no common spec for packaging functions. Serverless package formats are vendor-specific and are highly dependent on the type of environment in which your functions are going to run. This means that, for the most part, your serverless applications are locked down to a single provider, even if your function code itself abstracts provider-specific details.

This article explores the opportunity to create a spec for an open and extensible “Function Package” that enables the deployment of a serverless function binary, along with some extra metadata, across different FaaS vendors.

Function runtime environments

When looking at today’s common runtime FaaS providers, we see two mainstream approaches for running functions: container function and custom function runtimes. Let’s have a closer look at each type.

Container function runtimes

This method uses a container-based runtime, such as Kubernetes, and is the common runtime method for on-prem FaaS solutions. As its name suggests, it usually exposes container runtime semantics to users at one level or another. Here’s how it works.

A container entry-point is used as a function, together with an eventing layer, to invoke a specific container with the right set of arguments. The function is created with docker build or any other image generator to create an OCI container image as the final package format.

The main issue with container functions is that developing a function often requires understanding of the runtime environment; developers need to weave their function into the container. Sometimes this process is hidden by the framework, but it is still very common for the developer to need to write a Dockerfile for finer control over operating system-level services and image structure. This leads to high coupling of the function with the runtime environment.

Custom function runtimes

Custom function runtimes are commonly offered by cloud providers. They offer a “clean” model where functions are created by simply writing handler callbacks in your favorite programming language. In contrast to container-based functions, runtime details are left entirely to the cloud provider (even in cases where, behind the scenes, the runtime is based on containers).

Custom runtime environments use a language-centric model for functions. Software languages already have healthy packaging practices (such as Java jars, npm packages or Go modules). Still, no common packaging format exists for their equivalent functions. This creates cloud-vendor lock-in. Some efforts have been made to define a common deployment model (such as the AWS Serverless Application Model (SAM) and the open-source Serverless Framework), but these models assume a custom binary package already exists, or they may include the process of building it to each cloud provider’s standards.

The need for a Function Package

To be able to use functions in production, we need to use stable references to them to enable repeatability in deploying, upgrading and rolling back to a specific function version. This can be achieved with an immutable, versioned, sealed package that contains the function with all its direct dependencies.

Container images may meet these requirements because they offer a universal package format. However, there’s a side effect of “polluting” the function by tightly coupling it with details of the container runtime. Custom runtimes also exhibit coupling with their function packages. While they offer clean functions, they use proprietary package formats that mix the function dependencies with the runtime dependencies.

What we need is a clear separation: a clean function together with its dependencies in a native “Function Package” separate from external definitions of runtime-specific dependencies. This separation would allow us to take a function and reuse it across different serverless platforms, only adding external configuration as needed.

Getting a function to run

Let us reexamine for a moment the steps required to build and run a container-based function. We can look at this as a four-step process:

(1) Packaging

Build a container image together with (direct + runtime) function dependencies and entry point definition

(2) Persisting

Push the image to a container registry so that we have a stable reference to it

(3) Installing

Pull the image to the runtime and configure it according to runtime-specific dependencies

(4) Running

Accept function events at the defined entry point

This process works pretty well for container runtimes. We can try to formulate it into a more generalized view:

(1) Packaging

Build a function package that contains its direct dependencies (or references to them) and entry point definition

(2) Persisting

Upload the package to a package registry

(3) Installing

Download the package (and its direct dependencies) to the runtime and configure runtime-specific dependencies

(4) Running

Accept function events at the defined entry point

Creating a function package

This general process can be applied across many serverless providers! Developers only need to worry about creating a clean Function Package and keeping it persistent. Runtime configuration can be provided upon installation and can be vendor-specific.

A Function Package would contain:

Function files – in source code or binary format

Direct dependencies – libraries and data files

Entry point definition
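As an illustration, the contents listed above could be modeled as a small Go struct. This is a hypothetical sketch of ours, not a defined spec format:

```go
package main

import "fmt"

// FunctionPackage is a hypothetical model of a Function Package:
// function files, direct dependencies, and an entry point definition.
type FunctionPackage struct {
	// Files holds the function itself, in source code or binary form.
	Files []string

	// Dependencies lists direct dependencies (libraries and data files),
	// embedded or given as stable references such as JAR coordinates.
	Dependencies []string

	// EntryPoint defines where events are delivered, e.g. a class
	// reference in Java or a "main" package reference in Go.
	EntryPoint string
}

func main() {
	pkg := FunctionPackage{
		Files:        []string{"handler.go"},
		Dependencies: []string{"github.com/pkg/errors v0.8.0"},
		EntryPoint:   "main",
	}
	fmt.Printf("%d file(s), entry point %q\n", len(pkg.Files), pkg.EntryPoint)
}
```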

The experience for the developer is simple and programming-focused, so we can create spec “profiles” that map to specific language types. For example:

Function files
- Java Profile: JAR file
- Golang Profile: Go module source files
- Docker Profile: build file references to generic files run by the entry point

Direct dependencies
- Java Profile: generated POM file for the JAR with a full, flat list of dependencies, plus data files
- Golang Profile: go.mod file containing the dependencies, plus data files
- Docker Profile: build file references to the base image, plus other service files and data

Entry point definition
- Java Profile: class reference
- Golang Profile: “main” package reference
- Docker Profile: build file references to the image entry point

What about dependencies?

A function package does not necessarily have to physically embed binary dependencies if they can be reliably provided by the runtime prior to installation. Instead, only references to dependencies could be declared: for example, external JAR coordinates declared in a POM file, or Go packages declared in a go.mod file. These dependencies would be pulled by the runtime during installation, similar to how a container image is pulled from a Docker registry by function runtimes.
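For instance, a Go function package might carry only a go.mod file declaring its direct dependencies by reference (a hypothetical minimal example; github.com/pkg/errors stands in for whatever library the function actually uses):

```
module example.com/hello-function

go 1.11

require github.com/pkg/errors v0.8.0
```

At install time, the runtime would resolve and download these modules itself, much as it pulls an image referenced by digest from a container registry.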

By using stable references, we guarantee repeatability and reuse. We also create lighter packages that allow for quicker and more economical installation of functions: the runtime pulls dependencies from a registry close to it rather than having them re-uploaded each time.

Summary

Creating profiles for common runtime languages allows functions to be easily moved across different serverless FaaS providers, adding vendor-specific information pertaining to runtime configuration and eventing only at the installation phase. For application developers, this means they can avoid vendor lock-in and focus on writing their applications’ business logic without having to become familiar with operational aspects. This goal can be achieved more quickly by creating shareable Function Packages as a CNCF Serverless spec. If you are interested in discussing more – let’s talk!

KubeCon + CloudNativeCon has expanded from its start with 500 attendees in 2015 to become one of the largest and most successful open source conferences ever. With that growth comes challenges, and CNCF is eager to evolve the conference over time to best serve the cloud native community. Our upcoming event in Seattle (December 10-13, 2018), our biggest yet, is sold out several weeks ahead of time with 8,000 attendees.

From the start and throughout this growth, we’ve appreciated the feedback and input the community has shared. We carefully review the post-event surveys and listen closely to suggestions and new ideas. This feedback loop is crucial and allows us to iterate and improve.

As we open the call for proposals (CFP) for Barcelona (May 20-23, 2019), we want to share several changes we’re planning to make in 2019, as well as some changes we considered but decided not to implement at this time. CNCF is part of the Linux Foundation (LF) and leverages the LF’s decade of experience running open source events, including more than 100 in 2018 with more than 30,000 attendees from more than 11,000 organizations and 113 countries. We’ve also received a lot of feedback from previous events, much of it laudatory and some with specific proposals for improvement.

Here are some changes we’re planning to implement in 2019:

CFPs will have room for longer submissions to encourage presenters to share additional background and technical information in their proposals.

We will be willing to provide additional feedback to submitters whose talks are not selected. This feedback will fall in a set of categories rather than be personalized.

We have improved our tooling to only accept a single CFP talk from each speaker (or two co-presenter talks), and are limiting submitters to two solo submissions, four co-presenter submissions, or a combination.

In the maintainer track offered to CNCF-hosted projects, the Kubernetes SIGs and working groups, and CNCF working groups, we’re offering the opportunity to combine the intro and deep dive sessions into a longer 80-minute session. (Note that maintainer talks do not count against the single CFP talk per speaker quota.)

We are introducing two new kinds of smaller events in 2019. Kubernetes Days will be single day, single track events targeted at regions with large numbers of developers who cannot necessarily travel easily to our premiere events in Europe, China, and North America. Cloud Native Community Days will be regional events run by community members and will provide additional opportunities for speakers, practitioners and end users to come together.

We are encouraging any of our partner summits that would like to try a double-blind talk submission process to do so.

Here are some of the core elements of how we run the event that we are not planning to change:

KubeCon + CloudNativeCon is a conference for developers and end users (broadly defined) to communicate about open source, cloud native technologies.

Talks are rated by a program committee of community leaders and highly-rated speakers from past conferences, organized by conference co-chairs. The program committee is selected by the conference co-chairs, who also select the tracks and make the final talk selections.

Whether a company is a sponsor of the event or a member of CNCF has no impact on whether talks from their developers are selected. The only exception is that each diamond sponsor (6 in total) gets a 5-minute sponsored keynote. The co-chairs are now working with all keynote speakers, including the sponsored ones, to avoid vendor pitches so that the talks resonate with a community audience.

All talks are about using and/or developing open source software. Although many speakers are employed by software vendors, the conference content is focused on working with open source, not vendor offerings.

Talks can discuss one of CNCF’s 19 graduated/incubating projects or 11 sandbox projects or any other open source technology that adds value to the cloud native ecosystem.

We remain committed to increasing the voice of those who have been traditionally underrepresented in tech.

We select community leaders to serve as conference co-chairs and represent the cloud native community. The co-chairs for Barcelona are Janet Kuo of Google and Bryan Liles of Heptio. They are in the process of selecting a program committee of around 80 experts, which includes project maintainers, active community members, and highly-rated presenters from past events. Program committee members register for the topic areas they’re comfortable covering, and CNCF staff randomly assign a subset of relevant talks to each member. We then collate all of the reviews and the conference co-chairs spend a very challenging week assembling a coherent set of topic tracks and keynotes from the highest-rated talks. Here are the scoring guidelines we provide to the program committee. There is not a one-to-one mapping of topic areas to session tracks. We look to the conference co-chairs to craft a program that reflects current trends and interests in the cloud native community.

The above process is used to select the ~180 CFP sessions, which are offered in ~10 rooms. The keynote talks are selected by the conference co-chairs from highly-rated CFP submissions, or in rare cases, by invitation of the co-chairs to specific speakers.

In addition, KubeCon + CloudNativeCon also includes ~90 maintainer sessions spread across ~5 rooms. This is content produced by the maintainers of CNCF-hosted projects to inform users about the projects, add new adopters, and transition some of them from users to contributors. Sessions in the maintainer track are open to each of CNCF’s (29) hosted projects, the Kubernetes SIGs and working groups, and CNCF working groups. Each of these can do one 35-minute Intro and one 35-minute Deep Dive session. New for 2019, we’re offering to schedule these back-to-back to enable one 80-minute session.

Another fast-growing part of KubeCon + CloudNativeCon is the partner summits held the day before the event. This is an opportunity for projects and companies in the cloud native community to engage with KubeCon + CloudNativeCon attendees. For Seattle, there are 27 separate events! They range from community-organized events like the Kubernetes Contributor Summit and EnvoyCon to member-organized summits to open source projects from adjacent communities like networking initiatives FD.io and Tungsten Fabric. The content and pricing of these events are determined by the organization that runs each one.

The Review Process

Submissions for the CFP sessions are selected in a single-blind process. That is, the reviewer can see information on who is proposing the talk but the submitter does not see who reviewed their submission. Some academic conferences have switched to double-blind submissions, where the submitter removes all identifying information from their submission and the reviewers judge it based solely on the quality of the content. The downside is that it would require significantly more detailed submissions.

Submissions for Barcelona consist of a title and up to a 900 character description, which is used in the schedule if the talk is selected. There is an additional Benefits to the Ecosystem section of up to 1,500 characters to make the case for the submission (this is up significantly from the 300 characters allowed in 2018). To support double-blind selection, we would need to require submissions of 9,000 characters (~3 pages) or more, which is typical of academic-style conferences to encourage effective review. We believe this would discourage many of the practitioners and end users of cloud native technologies from submitting, and more talks would come from academics and those with the time and proclivity to make longer submissions.

This has pros and cons, but it would be a very significant change and unprecedented among open source conferences run by the LF. We considered testing a double-blind process with one topic area (such as service mesh) but decided that it would be too big of a change for an unknown improvement. Instead, we are encouraging any of our partner summits that would like to try a double-blind talk submission process to do so. The LF Events staff is happy to work with them to organize such a process for Barcelona or future events, and if the results go well, we might expand to one or more tracks at a future KubeCon + CloudNativeCon.

For Seattle, the acceptance rate was only 13%, which we understand creates a lot of disappointment and frustration when a very good talk is not accepted. For 2019, we will be providing additional feedback to submitters whose talks are not selected. This feedback will fall in a set of categories such as “not in top half of scores submitted” and “highly-rated but a similar talk was accepted instead.”

How to Get Your Talk Accepted

Whether a company is a member or end user supporter of CNCF or is sponsoring the event has no impact on whether talks from their developers will be selected. The only exception is that the 6 diamond sponsors each get a 5-minute sponsored keynote. However, being a community leader does have an impact, as program committee members will often rate talks from the creators or leaders of an open source project more highly.

Avoid the common pitfall of submitting a sales or marketing pitch for your product or service, no matter how compelling it is. Focus on your work with an open source project, whether it is one of the CNCF’s 29 hosted projects or a new project that adds value to the cloud native ecosystem.

KubeCon + CloudNativeCon is fundamentally a community conference focusing on the development and deployment of cloud native open source projects. So, pick your presenter and target audience accordingly. Our participants range from top experts to total beginners, so we explicitly ask what level of technical difficulty your talk is targeted for (beginner, intermediate, advanced, or any) and aim to provide a range.

We often get many submissions covering almost the same concept, so even if there are several great submissions, the co-chairs will probably only pick one. Consider choosing a distinctive topic that is relevant but less likely to be submitted by multiple people.

Our community is particularly interested in end users adopting cloud native technology. End users are companies that use cloud native technologies internally but do not sell any cloud native services externally. End users generally do not have a commercial product on the Cloud Native Landscape, though they may have created an open source project to share their internal technology. For more information, please see the kinds of companies in CNCF’s End User Community. If you don’t work for an end user company, consider co-presenting with an end user who has adopted your technology.

Given that talk recordings are available on YouTube, and there is very limited space on the agenda, avoid submissions that were already presented at a previous KubeCon + CloudNativeCon or any other event. If your submission is similar to a previous talk, please include information on how this version will be different. Make sure your presentation is timely, relevant, and new.

We’ve improved our tooling to only accept a single CFP talk from each speaker, and are limiting submitters to two submissions. Specifically, we’re counting being a co-presenter as 0.5 of a talk, and limiting submissions to 2.0 talks in all. So, at most, you can submit as a co-presenter on 4 talks, a solo presenter on 2 talks, or as a solo presenter on 1 and a co-presenter on 2.

Look through the talks that were selected for Copenhagen and Seattle and notice that most have clear, compelling titles and descriptions. The CFP form has a section for including resources that will help reviewers assess your submission. If you have given a talk before that was recorded, please include a link to it. Blog posts, code repos, and other contributions can also help establish your credentials, especially if this will be your first public talk (and we encourage first-time speakers to apply).

Finally, we are explicitly interested in increasing the voice of those who have been traditionally underrepresented in tech. All submissions are reviewed on merit, but we remain dedicated to having a diverse and inclusive conference and we will continue to actively take this into account when finalizing the list of speakers and the overall schedule. For example, we don’t accept panel proposals where all speakers are men. We also provide diversity scholarships to offset travel costs.

Other Cloud Native Conferences

In 2019, we plan to continue to hold three KubeCon + CloudNativeCon events, in Barcelona (May 20-23, 2019), Shanghai (June 24-26, 2019), and San Diego (November 18-21, 2019). In addition, we support 160 Meetup groups in 38 countries, which have hosted more than 1,600 events and have more than 80,000 members.

New for 2019, we are going to launch two new kinds of smaller events. Kubernetes Days will be single-day, single-track events targeted at regions with large numbers of developers who cannot necessarily travel easily to our premiere events in Europe, China, and North America. The first one will be held in Bengaluru, India on March 23, 2019.

In addition, we are planning to support a set of community-organized events called Cloud Native Community Days. These will be regional events run by community members in those areas and provide additional opportunities for speakers, practitioners and end users to come together. We’ll have more details about these programs early in 2019.

Barcelona and Shanghai Submissions

The CFP for Barcelona is open now and the deadline is January 18, 2019. The deadline for submitting talks to KubeCon + CloudNativeCon Shanghai (June 24-26, 2019) will be February 1, 2019. The submission and selection processes are separate. If you submit the same talk to both and it is accepted for one, it will be rejected from the other, so we encourage you to submit different content to each conference.

Follow-Up

If you have questions on our processes for selecting talks or ideas on how to improve them, or other thoughts on CNCF events, please reach out to me at Dee Kumar <dkumar@linuxfoundation.org> or book a time for us to speak at https://calendly.com/deekumar.

JD.com, China’s largest retailer, has been presented with the Top End User Award by the Cloud Native Computing Foundation (CNCF) for its unique usage of cloud native open source projects. The award was announced at China’s first KubeCon + CloudNativeCon conference hosted by CNCF, which gathered thousands of technologists and end users in Shanghai from November 13-15 to discuss the future of open source technology development.

Providing the ultimate e-commerce experience to customers requires JD to house and process enormous amounts of information that must be accessible at incredibly fast speeds. To put it in perspective, five years ago there were only about two billion images in JD’s product databases for customers. Today, there are more than one trillion, and that figure increases by 100 million images each day. This is why JD turned to CNCF’s Kubernetes project in recent years to accommodate its clusters.

JD currently runs the world’s largest Kubernetes cluster in production. The company first rolled out its containerized infrastructure a few years ago and, as the clusters grew, JD was one of the early adopters to shift to Kubernetes. The move, known as JDOS 2.0, marked the beginning of JD’s partnership with CNCF to build stronger collaborative relationships with the industry’s top developers, end users, and vendors. Ultimately, CNCF provided a window for JD to both contribute to and benefit from open source development.

In April, JD became the CNCF’s first platinum end user member, and took a seat on the organization’s governance board in order to help shape the direction of future Foundation initiatives. JD’s overall commitment to open source is highly aligned with its broader Retail as a Service strategy in which the company is empowering other retailers, partners, and industries with a broad range of capabilities in order to increase efficiency, reduce costs, and provide a higher level of customer service.

JD’s Kubernetes clusters support a wide range of workloads and big data and AI-based applications. The platform has boosted collaboration and enhanced productivity by reducing silos between operations and DevOps teams. As a result, JD has contributed code to projects such as Vitess, Prometheus, Kubernetes, CNI (Container Networking Interface), and Helm as part of its collaboration with CNCF.

“One contribution that we are very proud of is Vitess, the CNCF project for scalable MySQL cluster management,” said Haifeng Liu, chief architect, JD.com. “We are not only the largest end user of Vitess, but also a very active and significant contributor. We’re looking forward to working together with CNCF and its members to pave the way for future development of open source technology.”

Vitess allows JD to manage resources much more flexibly and efficiently, reducing operational and maintenance costs, and JD has one of the world’s most complex Vitess deployments. The company is actively collaborating with the CNCF community to add new features such as subquery support and global transactions, setting industry benchmarks.

“JD spearheads the use of cloud native technologies at scale within the APAC market, and is responsible for one of the largest Kubernetes deployments in the world,” said Chris Aniszczyk, COO of Cloud Native Computing Foundation. “The company also makes significant contributions to CNCF projects and its involvement in the community made JD a natural fit for this award.”

JD will continue to work on contributions to cloud native technologies as well as release its own internal and homegrown open source projects to empower others in the community.

Harbor started in 2014 as a humble internal project meant to address a simple use case: storing images for developers leveraging containers. The cloud native landscape was wildly different and tools like Kubernetes were just starting to see the light of day. It took a few years for Harbor to mature to the point of being open sourced in 2016, but the project was a breath of fresh air for individuals and organizations attempting to find a solid container registry solution. We were confident Harbor was addressing critical use cases based on its strong growth in user base early on.

We were incredibly excited when Harbor was accepted to the Cloud Native Sandbox in the summer of 2018. Although Harbor had been open sourced for some years by this point, having a vendor-neutral home immediately impacted the project resulting in increased engagement via our community channels and GitHub activity.

There were many things we immediately began tackling after joining the Sandbox, including addressing some technical debt, laying out a roadmap based solely on community feedback, and expanding the number of contributors to include folks that have consistently worked on improving Harbor from other organizations. We’ve also started a bi-weekly community call where we hear directly from Harbor users on what’s working well and what’s not. Finally, we’ve ratified a project governance model that defines how the project operates at various levels.

Given Harbor’s already-large global user base across organizations small and large, proposing the project mature into the CNCF Incubator was a natural next step. The processes around progressing to Incubation are defined here. In order to be considered, certain growth and maturity characteristics must first be demonstrated by the project:

Production usage: There must be users of the project that have deployed it to production environments and depend on its functionality for their business needs. We’ve worked closely with a number of large organizations leveraging Harbor over the last few years, so: check!

Healthy maintainer team: There must be a healthy number of members on the team that can approve and accept new contributions to the project from the community. We have a number of maintainers that founded the project and continue to work on it full time, in addition to new maintainers joining the party: check!

Healthy flow of contributions: The project must have a continuous and ongoing flow of new features and code being submitted and accepted into the codebase. Harbor released v1.6 in the summer of 2018, and we’re on the verge of releasing v1.7: check!

CNCF’s Technical Oversight Committee (TOC) evaluated the proposal from the Harbor team and concluded that we had met all the required criteria. It is both deeply humbling and an honor to be in the company of other highly-respected incubated projects like gRPC, Fluentd, Envoy, Jaeger, Rook, NATS, and more.

What’s Harbor anyway?

Harbor is an open source cloud native registry that stores, signs, and scans container images for vulnerabilities.

Harbor solves common challenges by delivering trust, compliance, performance, and interoperability. It fills a gap for organizations and applications that cannot use a public or cloud-based registry, or want a consistent experience across clouds.

Harbor addresses the following common use cases:

On-prem container registry – organizations with the desire to host sensitive production images on-premises can do so with Harbor.

Vulnerability scanning – organizations can scan images before they are used in production. Images with failed vulnerability scans can be blocked from being pulled.

Image replication – production images can be replicated to disparate Harbor nodes, providing disaster recovery, load balancing and the ability for organizations to replicate images to different geos to provide a more expedient image pull.

Architecture

The “Harbor stack” comprises various third-party components, including nginx, Docker Distribution v2, Redis, and PostgreSQL. Harbor also relies on Clair for vulnerability scanning and Notary for image signing.

The Harbor components, highlighted in blue, are the heart of Harbor and are responsible for most of the heavy lifting:

Core Services provides an API and UI, and intercepts docker pushes and pulls to enforce role-based access control and to prevent vulnerable images from being pulled and subsequently used in production (all of this is configurable).

Admin service is being phased out in v1.7, with its functionality being merged into the core service.

Job Service is responsible for running background tasks (e.g., replication, one-shot or recurring vulnerability scans, etc.). Jobs are submitted by the core service and run in the job service component.

Currently Harbor is packaged via both a docker-compose service definition and a Helm chart.

Community stats and graphs

Harbor has continued an upward trajectory of community growth through 2018. The stats below visualize the consistent growth pre- and post-acceptance into the Cloud Native Sandbox:

Where we are

Harbor is both mature and production-ready. We know of dozens of large organizations leveraging Harbor in production, including at least one serving millions of container images to tens of thousands of compute nodes. The various components that comprise Harbor’s overall architecture are battle-tested in real-world deployments.

Harbor is API driven and is being used in custom SaaS and on-prem products by various vendors and companies. It’s easy to integrate Harbor in your environment, whether a customer-facing SaaS or an internal development pipeline.

The Harbor team strives to release quarterly. We’re currently working on our eighth major release, v1.7, due out soon. Over the last two releases alone we’ve made marked strides toward our long-term goals:

Native support of Helm charts

Initial support for deploying Harbor via Helm chart

Refactoring of our persistence layer, now relying solely on PostgreSQL and Redis – this will help us achieve our high-availability goals

Where we’re going

This is the fun part. 🙂

Harbor is a vibrant community of users – those who use Harbor and publicly share their experiences, the individuals who report and respond to issues, the folks who hang around in our Slack community, and those who spend time on GitHub improving our code and documentation. We’re all incredibly fortunate in the rich and exciting ideas that are proposed via GitHub issues on a regular basis.

We’re still working on our v1.8 roadmap, but here are some major features we’re considering and might land at some point in the future (timing to be determined, and contributions are welcome!):

Image proxying and caching – a docker pull would proxy the request to, say, Docker Hub, then scan the image before providing it to the developer. Alternatively, pre-cache images and block those that do not meet vulnerability requirements.

Please feel free to share your wishlist of features via GitHub; just open an issue and share your thoughts. We keep track of items the community desires and will prioritize them based on demand.

How to get involved

Getting involved in Harbor is easy. Step 1: don’t be shy. We’re a friendly bunch of individuals working on an exciting open source project.

The lowest barrier to entry is joining us on Slack. Ask questions, give feedback, request help, share your ideas on how to improve the project, or just say hello!

We love GitHub issues and pull requests. If you think something can be improved, let us know. If you want to spend a few minutes fixing something yourself – docs, code, error messages, you name it – please feel free to open a PR. We’ve previously discussed how to contribute, so don’t be shy. If you need help with the PR process, the quickest way to get an answer is probably to ping us on Slack.

The bi-annual CNCF survey takes a pulse of the community to better understand the adoption of cloud native technologies. This is the second time CNCF has conducted its cloud native survey in Mandarin to better gauge how Asian companies are adopting open source and cloud native technologies. The previous Mandarin survey was published in March 2018. This post also makes comparisons to the most recent North American / European version of this survey from August 2018.

Key Takeaways

Usage of public and private clouds in Asia has grown 135% since March 2018, while on-premise has dropped 48%.

Usage of nearly all container management tools in Asia has grown, with commercial off-the-shelf solutions up 58% overall, and home-grown solutions up 690%. Kubernetes has grown 11%.

The number of Kubernetes clusters in production is increasing. Organizations in Asia running 1-5 production clusters decreased 37%, while respondents running 11-50 clusters increased 154%.

Use of serverless technology in Asia has spiked 100% with 29% of respondents using installable software and 21% using a hosted platform.

300 people responded to the Chinese-language version, 83% of them from Asia, compared to 187 respondents to our March 2018 survey.

CNCF has a total of 42 members across China, Japan, and South Korea, including 4 platinum members: Alibaba Cloud, Fujitsu, Huawei, and JD.com. A number of these members are also end users.

Growth of Containers

Container usage is becoming prevalent in all phases of the development cycle. There has been a significant jump in the use of containers for testing, up to 42% from 24% in March 2018 with an additional 27% of respondents citing future plans. There has also been an increase in use of containers for Proof of Concept (14% up from 8%).

As the usage of containers becomes more prevalent across all phases of development, the use of container management tools is growing. Since March 2018, there has been a significant jump in the usage of nearly all container management tools.

Usage of Kubernetes has grown 11% since March 2018. Other tools have also grown:

Amazon ECS: up to 22% from 13%

CAPS: up to 13% from 1%

Cloud Foundry: up to 20% from 1%

Docker Swarm: up to 27% from 16%

Shell Scripts: up to 14% from 5%

There are also two new tools that were not cited in the March 2018 survey. 16% of respondents are using Mesos and an additional 8% are using Nomad for container management.

Commercial off-the-shelf solutions (Kubernetes, Docker Swarm, Mesos, etc.) have grown 58% overall, while home-grown management (Shell Scripts and CAPS) have grown 690%, showing that home-grown solutions are still widely popular in Asia while North American and European markets moved away from those in favor of COTS solutions.

Cloud vs. On-Premise

While on-premise solutions are widely used in the North American and European markets (64%), that number seems to be declining for the Asian market. Only 31% of respondents reported using on-premise solutions in this survey, compared to 60% in March 2018. Cloud usage is growing with 43% of respondents using private clouds (up from 24%) and 51% using public clouds (up from 16%).

Kubernetes

As for where Kubernetes is being run, Alibaba still remains No. 1 with 38% of respondents reporting usage, but is down from 52% in March 2018. Following Alibaba is Amazon Web Services (AWS), with 24% of respondents citing usage, slightly down from 26%.

The decline of on-premise usage is also evident in these responses, with 24% of respondents reporting that they run Kubernetes on-prem compared to 38% in March 2018. OpenStack usage has also declined significantly, down to 9% from 26% in March 2018.

For organizations running Kubernetes, the number of production clusters is also increasing. Respondents running 1-5 production clusters decreased 37%, while respondents running 11-50 clusters increased 154%. Still, respondents are mostly running 6-10 production clusters, with 29% reporting that number.

We also asked respondents about the tools they are using to manage various aspects of their applications:

Packaging

The most popular method of packaging Kubernetes applications is Managed Kubernetes Offerings (37%), followed by Ksonnet (27%) and Helm (24%).

Autoscaling

Respondents are primarily using autoscaling for Task and Queue processing applications (44%) and Java Applications (44%). This is followed by stateless applications (33%) and stateful databases (29%).

The top reasons respondents aren’t using Kubernetes autoscaling capabilities are that they are using a third-party autoscaling solution (32%), were not aware these capabilities existed (30%), or have built their own autoscaling solution (26%).

Cloud Native Projects

What are the benefits of cloud native projects in production? Respondents cited the top four reasons as:

Improved Availability (47%)

Improved Scalability (46%)

Cloud Portability (45%)

Improved Developer Productivity (45%)

Compared to the North American and European markets, improved availability and developer productivity are more important in the Asian market, while faster deployment time is less important (only 38% cited this compared to 50% in the English version of this survey).

As for the cloud native projects that are being used in production and evaluated:

Many cloud native projects have grown in production usage since March 2018. Projects with the largest spikes in production usage are gRPC (22%, up from 13%), Fluentd (11%, up from 7%), Linkerd (11%, up from 7%), and OpenTracing (27%, up from 20%).

The number of respondents evaluating cloud native projects also grew: gRPC (20%, up from 11%), OpenTracing (27%, up from 18%), and Zipkin (12%, up from 9%).

Challenges Ahead

As cloud native technologies continue to be adopted, especially into production, there are still challenges to address. The top challenges respondents are facing are:

Lack of training (46%)

Difficulty in choosing an orchestration solution (30%)

Complexity (28%)

Finding vendor support (28%)

Monitoring (25%)

One interesting note is that many of these challenges have declined significantly since our previous survey in March 2018, as more resources are added to address them. A new challenge that has emerged is lack of training. While CNCF has invested heavily in Kubernetes training over the past year, including courses and certification for Kubernetes Administrators and Application Developers, we are still actively working to make translated versions of the courses and certifications available and more easily accessible in Asia. CNCF is also working with a global network of Kubernetes Training Partners to expand these resources, as well as Kubernetes Certified Service Providers to help support organizations with the complexity of embarking on their cloud native journey.

Serverless

The use of serverless technology has spiked, with 50% of organizations using the technology compared to 25% in March 2018. Of that 50%, 29% are using installable software and 21% are using a hosted platform. An additional 17% plan to use the technology within the next 12-18 months.

For installable serverless platforms, Apache OpenWhisk is the most popular, with 11% of respondents citing usage. This is followed by Dispatch (6%), Fn (5%), and OpenFaaS, Kubeless, and Fission tied at 4%.

For hosted serverless platforms, AWS Lambda is the most popular, with 11% of respondents citing usage. This is followed by Azure Functions (8%), and Alibaba Cloud Compute Functions, Google Cloud Functions, and Cloudflare Functions tied at 7%.

Serverless usage in Asia is higher than in the North American and European markets, where 38% of organizations were using serverless technology. There, hosted platforms (32%) were also much more popular than installable software (6%), whereas in Asia both options see more even use. There is also much more variety in the solutions used in Asia, whereas AWS Lambda and Kubeless were the clear leaders in North America and Europe.

Relating back to CNCF projects, a small percentage of respondents are now evaluating CloudEvents (3%) or using it in production (2%). CloudEvents is an effort organized by CNCF’s Serverless Working Group to create a specification for describing event data in a common way.

Cloud Native is Growing in China

As cloud native continues to grow in China, the methods for learning about these technologies become increasingly important. Here are the top ways respondents are learning about cloud native technologies:

Documentation

50% of respondents are learning through documentation. Each CNCF project hosts extensive documentation on its website, which can be found here. Kubernetes, in particular, is currently working on translating its documentation and website into multiple languages, including Japanese, Korean, Norwegian, and Chinese.

Webinars

29% of respondents are learning through technical webinars. CNCF runs a weekly webinar series that takes place every Tuesday from 10am-11am PT. You can see the upcoming schedule and view recordings and slides of previous webinars.

The Cloud Native Community in China

As cloud native continues to grow in Asia, CNCF is excited to be hosting the first annual KubeCon + CloudNativeCon in Shanghai this week. With over 1,500 attendees at the inaugural event, we look forward to seeing the continued growth of cloud native technologies at a global scale.

To keep up with the latest news and projects, join us at one of the 22 cloud native Meetups across Asia. We hope to see you at one of our upcoming Meetups!

The pool of respondents represented a variety of company sizes, with the majority in the 50-499 employee range (48%). As for job function, respondents identified mostly as Developers (22%), Development Managers (15%), and IT Managers (12%).

Respondents represented 31 different industries, the largest being software (13%) and financial services (11%).

This survey was conducted in Mandarin. You can view additional demographic breakdowns below: