When we run an application under orchestration, we no longer control which machine a piece of code will run on. Does this constitute a security weakness? How do we cope when security patches need to be applied in response to vulnerabilities? In this talk we will see how automation and DevOps processes can help us address these concerns, and we will explore the properties of containerised microservices that help us keep our software safer when our deployments come under attack.

Kubernetes has enabled an entirely new generation of applications, deployed using an easy-to-read, declarative language and distributed across many machines and clouds. Machine learning, with its need for large-scale, portable workloads, has been one of the primary beneficiaries.

In this talk, we will cover the Kubeflow project, a cross-industry effort to make machine learning on Kubernetes simple, portable and composable, and to unlock an entire industry of data scientists and developers to engage with this new field.

Kubernetes has become a go-to solution for container-based infrastructure. However, this technology is still quite new for today's companies, and managing this new kind of infrastructure comes with its own challenges.

At Algolia, we have been using Kubernetes for the last two years to collect, store and process the logs of our search engine infrastructure, which is spread across thousands of servers in 16 regions. Kubernetes has scaled, and is still scaling, with us (our volume doubles every year) to process more than one billion lines of logs every day and produce near-real-time metrics and analytics data for our customers.

In this talk, we will present our hybrid infrastructure (bare-metal for the search engine and cloud-based for our backend systems), explain why we made this choice and why it is still relevant to us today to run on managed Kubernetes clusters with Google Kubernetes Engine (GKE).

Some weeks ago, I was at KubeCon + CloudNativeCon EU. Three main topics that
are not as well known as they should be came up a lot:

- Service Mesh
- Serverless/FaaS
- Modern observability / Tracing

For most people, these topics are really fuzzy since they are somewhat recent.
It's easy to wonder what a Service Mesh offers compared to traditional load balancing, or which technology you should pick among Linkerd, Conduit, Envoy and Istio when they all seem to do the same things. For Serverless and FaaS, one could wonder what they bring to the table when we already have orchestrators and containers that we can deploy in one command. And finally, it's easy to get lost among all the monitoring paradigms: metrics, logs, tracing... why is everyone, and every product, adding tracing capabilities these days?

We'll talk about all of this during these 20 minutes. It may seem like a lot, but for each topic we'll go straight to the point: ``what need is it looking to address, and why should I care?``

Cloud Native technologies are changing the technology world today. Open source projects like Kubernetes, Prometheus, gRPC, Helm and others are the leading choices for building modern, scalable, reliable and performant microservices-based environments.

In this talk, Ihor Dvoretskyi, Developer Advocate at the Cloud Native Computing Foundation (CNCF), will provide an overview of the projects hosted by the CNCF and their role in the Cloud Native world.

At Datadog we help thousands of organisations monitor their infrastructure and applications. In this session, we’ll dive deeper into the several hundred trillion data points we’ve gathered to extract information about the real-world use of containers and explore usage trends. Furthermore, we’ll discuss the top applications being used in containers and, using the data, provide insight into which metrics you should watch and how to troubleshoot based on those metrics. Finally, we’ll look at a framework for your metrics and how to use it to find solutions to problems that will inevitably occur.

We will cover the three types of monitoring data; what to collect; what should trigger an alert (avoiding an alert storm and pager fatigue); and how to follow the resources to find the root causes of problems. Although the real-world container use data is derived from Datadog users, the focus of this session is not tool specific, so attendees will leave with strategies and frameworks they can implement in their container-based environments today regardless of the platforms and tools they use.

There is a lot of discussion nowadays about using containers in production: are you there already? When operating a production platform, we should prepare for failure; in addition to monitoring working metrics, we cannot forget the most common failure points. From a monitoring-solution-agnostic perspective, and following a use-case-driven approach, we will learn the most common failure points in a Kubernetes infrastructure and how to detect them (metrics, events, checks, etc.).

Using Docker, AWS Fargate and CodePipeline, I’ll demonstrate how to go from proof of concept on a laptop to production in 45 minutes. Not only that, but all attendees will be given the URL to the code and examples on GitHub so they can take away what they’ve learned and put it into practice in their own time. Best of all, this will be a no-slides session, all live coding on stage. Time to pray to the demo gods!

The Concourse website states that “Concourse is an open-source continuous thing-doer” (https://concourse-ci.org). It is a great tool for implementing continuous integration and continuous delivery pipelines that are both fast and reliable. It is built for the cloud, using cloud-native principles and tools, by the folks who work on Cloud Foundry (CF) at Pivotal. They leveraged the knowledge of containers they gained while building CF and the custom CF container backend called Garden, which is runC-compatible.

In this talk, you’ll first learn the details of how Concourse works, the underlying principles and the architecture. We’ll then dive into a live demo: we’ll build a pipeline for a typical web-app from scratch, iteratively, showcasing the different principles and tools.

Three years ago, Meetic chose to rebuild its backend architecture using microservices and an event-driven strategy. As we moved away from our old legacy application, testing features gradually became a pain, especially when those features rely on multiple changes across multiple components. Whatever the number of applications you manage, unit testing is easy. The real challenge lies in end-to-end testing, even more so when a feature can involve up to 60 different components.

To solve that issue, Meetic is building a Kubernetes strategy around testing. To do so, we need to:

- Be able to generate a Docker container for each pull request on any component of the stack
- Be able to create a full testing environment in the simplest way possible
- Be able to launch automated tests on this newly created environment
- Optimize containers and the Kubernetes configuration to handle dozens of namespaces running simultaneously
- Have a clean-up process to destroy testing environments after the tests
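
As a loose illustration of the first two points, a per-pull-request environment needs a deterministic, DNS-safe namespace name. The naming scheme and function below are hypothetical, not Meetic's actual tooling; they only show the kind of constraint involved (Kubernetes namespace names must be lowercase DNS-1123 labels of at most 63 characters):

```python
import re

def pr_namespace(component: str, pr_number: int) -> str:
    """Build a DNS-1123-compliant namespace name for a pull-request environment.

    Kubernetes namespace names must be lowercase alphanumerics or '-',
    and at most 63 characters long.
    """
    slug = re.sub(r"[^a-z0-9-]", "-", component.lower()).strip("-")
    name = f"test-{slug}-pr{pr_number}"
    return name[:63].rstrip("-")
```

For example, `pr_namespace("User_Service", 128)` yields `test-user-service-pr128`, which a CI job could pass to `kubectl create namespace` before deploying the stack variant into it.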

To separate the various testing environments, we chose to use Kubernetes Namespaces, each containing a variant of the Meetic stack.

But when it comes to Kubernetes, managing multiple namespaces can be hard. YAML configuration files need to be shared in a way that each person or automated job can access and modify them without impacting the others. This is typically why Meetic chose to develop its own tool to manage namespaces, through a CLI tool and a REST API onto which a friendly UI can be plugged.

Managing over 50 namespaces, each running up to 60 containers, creates memory and CPU usage issues. This is where container and Kubernetes configuration optimizations take on their full meaning.

In this talk we will tell you the story of how our CI/CD evolved to satisfy the need to create a Docker container for each new pull request, and how we optimized those containers. Then we will cover optimizations on the Kubernetes side and namespace management.

Outfittery’s mission is to provide relevant fashion to men. In the past we relied purely on our stylists to put together the best outfits for our customers. Right now we are in the process of adding more and more intelligent algorithms to augment our human experts.

To support that, we’ve built a complex decision-making platform. But there is a bunch of additional functionality around that powerful platform that we don’t want to build in, for example intermediate data transformations or Slack notifications upon certain events. After research and evaluation, we chose Kubeless with the Serverless framework on top of it.
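
To give a flavour of this kind of glue code: Kubeless Python functions follow a `handler(event, context)` convention, where `event["data"]` carries the payload. The sketch below is illustrative only (the payload shape and message format are invented, not Outfittery's actual code):

```python
import json

def handler(event, context):
    """Kubeless-style entry point: turn an incoming event into a Slack message.

    In a real deployment, the Slack webhook URL would come from a Kubernetes
    secret (the missing secret support mentioned below); here we just return
    the message body that would be POSTed.
    """
    order = event.get("data") or {}
    text = f"Order {order.get('id', '?')} reached state {order.get('state', '?')}"
    return json.dumps({"text": text})
```

Deployed with the Serverless framework, such a function can be wired to an event trigger so that each matching event produces one notification.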

For me personally, that was the moment to start with Go programming. One core piece of functionality was missing: secret support. Getting scheduled triggers to work wasn’t easy either. But by now we have a setup that makes our lives easier.

This talk is about what lies at the foundation of the Dropbox infrastructure: its orchestration engine and the runtime environment. I won’t be revealing secret mind-blowing technologies or black magic tricks, but will rather tell you how we build reliable infrastructure to power products that people trust.

This talk will touch on several foundational components of Dropbox’s infrastructure platform, which is used to manage the whole Dropbox server fleet, from hardware provisioning to package management to distribution to the runtime environment. Specifically, I’m going to chat about the service delivery and runtime systems and cover the following topics:

- Their origins: novel-for-their-time design ideas from before the containers era, and why some of them still make sense, like a torrent-based image registry.
- Their evolution: how we transform these systems to embrace modern infrastructure trends such as containers, code as the source of truth and immutable infrastructure.
- Their future: what challenges we anticipate and what we’d like our infrastructure to look like in the coming years, and, most importantly, how we move fast without breaking things.

Orchestrating containers in a cluster is now an accessible task thanks to projects such as Kubernetes, Mesos, Docker Swarm mode or Nomad. Still, have you ever wondered why these tools occasionally fail? Why some of them are harder to deploy than others? Why they require very special care in choosing the topology and the underlying network and infrastructure?

All of these systems have a very similar architecture: they use a consensus-based mechanism (a majority of nodes agrees on a change) to spread metadata about nodes and running containers across the cluster. You will often observe that they use either Zookeeper or Etcd as a key/value metadata store. They also use predefined roles for machines, such as Manager or Agent, generally with a much larger number of Agents (pulling tasks to execute) than Managers (responsible for the scheduling process and for managing the cluster of nodes).

While such an architecture makes it easier for developers and administrators to reason about the system (everything is ordered in a strict timeline, i.e. a cluster of machines behaves as a single machine), it comes with a few downsides. When losing what is called a quorum (i.e. losing a majority of Manager nodes), existing containers keep running, but the orchestration system becomes unavailable and no more containers can be scheduled. Additionally, in an unreliable network environment (i.e. shared infrastructure or cloud), the system can be struck by a network partition or fail to deliver and receive acknowledgments for messages due to high latency, thus creating periods of infrastructure unavailability.
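
The quorum arithmetic behind "losing a majority of Manager nodes" is worth making explicit: with n managers, a majority of ⌊n/2⌋+1 must agree, so the cluster only tolerates ⌊(n-1)/2⌋ manager failures:

```python
def quorum(n_managers: int) -> int:
    """Smallest majority of n manager nodes."""
    return n_managers // 2 + 1

def tolerated_failures(n_managers: int) -> int:
    """Manager failures the cluster survives while still able to schedule."""
    return n_managers - quorum(n_managers)
```

This is why manager counts are usually odd: a 3-manager cluster survives 1 failure and a 5-manager cluster survives 2, while adding a fourth manager raises the quorum to 3 without improving fault tolerance at all.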

In this talk, we will explore ways to make orchestration systems more reliable and easier to deploy through decentralization, looking at Multi-Agent (self-organizing) systems and the use of Conflict-Free Replicated Data Types (CRDTs) to spread metadata in the cluster. We will introduce different categories of consistency guarantees and explain why Causal Consistency (as opposed to the Consensus/Strong Consistency used by the current generation of orchestration tools) may be sufficient to schedule and orchestrate containers in a cluster.
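
To give a flavour of what spreading metadata without consensus looks like, here is a grow-only counter (G-Counter), one of the simplest CRDTs: replicas increment independently and merges always converge, with no coordination required. This is a generic sketch, not the datatype any particular orchestrator uses:

```python
class GCounter:
    """Grow-only counter CRDT: one slot per replica, merge = element-wise max."""

    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts = {replica_id: 0}

    def increment(self, amount: int = 1) -> None:
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + amount

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter") -> None:
        # Merging is commutative, associative and idempotent, so replicas
        # can exchange state in any order, any number of times, and still
        # converge - no quorum, no leader.
        for rid, count in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), count)
```

Because merges never conflict, such state can be gossiped across an unreliable network and survive partitions, which is exactly the failure mode that quorum-based designs handle poorly.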

We will demonstrate an example of such a system and try out different failure modes. We will finally explain the downsides of decentralization, and why this may require new tooling and strategies to administer and debug transient failures in the system.

The end-goal is to spark a discussion on how we could improve upon existing solutions and create tomorrow’s next generation of container orchestration tools.

Kubernetes is very famous for orchestrating containers. With this in mind, it also embeds auto-scaling features that can scale a (micro)service deployment based on CPU or memory usage. That said, very few people are aware that Kubernetes can go one step further: scaling a (micro)service based on application-layer information (response time, number of requests processed in parallel, etc.).
This talk will introduce how people can enforce an application response time SLA using:
- HAProxy as an ingress controller, which provides both load balancing / reverse proxying and monitoring of the (micro)service response time
- Prometheus to collect the statistics data and format it
- A Kubernetes custom API endpoint to present the Prometheus data inside the Kubernetes cluster
- The Kubernetes Horizontal Pod Autoscaler, which can take scale-in / scale-out decisions based on the monitoring information available in the Kubernetes custom API endpoint (polled from Prometheus, which itself polls it from HAProxy)
- The Kubernetes Ingress controller, to close the loop, which will re-configure HAProxy on the fly based on the scaling information provided by the HPA
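
The scale-out decision at the heart of this loop is a simple proportional rule; the Horizontal Pod Autoscaler's documented formula is essentially the following, sketched here with a response-time metric as the example (the SLA numbers are illustrative):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(current_replicas * current_metric / target_metric)."""
    return math.ceil(current_replicas * current_metric / target_metric)
```

For instance, 4 pods averaging 300 ms against a 200 ms target yield `desired_replicas(4, 300, 200) == 6`, while the same pods averaging 100 ms would scale in to 2.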

And of course, this won't be a slides-only presentation: there will also be a nice live demonstration.

Five years ago, Docker hadn't even been officially released, and no orchestration or scheduling tool existed or was mature enough in the open-source world. Yet, to build a Platform-as-a-Service hosting company, these services are required.

This talk covers how choices were made while building the Scalingo platform, with the urge of producing a stable, production-ready hosting solution for third-party applications. It covers the emergence of Swarm, Kubernetes and other tools of this changing ecosystem, as well as why those tools are not one-size-fits-all approaches to container orchestration, especially when business rules are highly bound to the orchestration itself.

What is a container? Is it really a “lightweight VM”? What are namespaces and control groups? What does a host machine know about my containers? And what do my containers know about each other? In this talk Liz will live-code a container in a few lines of Go code, to answer all these questions and more, and show you exactly what’s happening under the covers when you run a container.
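
The talk live-codes the answer in Go; as a language-agnostic warm-up for "what does a host machine know about my containers?", note that every Linux process exposes its namespace membership under `/proc/<pid>/ns`, and two processes sharing an ID share that namespace. A tiny unprivileged, Linux-only sketch:

```python
import os

def namespace_ids(pid="self"):
    """Map each namespace kind (pid, net, uts, ...) to its identifier
    for a given process, as reported by /proc/<pid>/ns."""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}
```

Run against a containerised process and against the host shell, the `pid`, `mnt` and `net` entries will differ, which is precisely the isolation a container runtime sets up via `clone()`/`unshare()` flags.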

At Weaveworks, we need more than the L4 load balancing offered today with the Kubernetes Service abstraction. The Service & Endpoint objects have some extraordinary untapped powers: they can be used to build artisanal, high-level load balancing and session affinity schemes. This talk will present modern L7 load balancing, the various possible load balancing architectures in a Kubernetes cluster and demonstrate a tiny reverse proxy implementing service affinity using consistent hashing with bounded load.
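
The "consistent hashing with bounded load" idea can be sketched compactly: walk the hash ring as usual, but skip any backend already at its capacity of ⌈(1 + ε) × average load⌉. This is a toy sketch of the published algorithm, not Weaveworks' actual proxy:

```python
import bisect
import hashlib
import math

def _h(key: str) -> int:
    """Stable hash of a key onto the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class BoundedRing:
    """Consistent hashing with bounded load: no backend ever exceeds
    ceil((1 + epsilon) * average load), at the cost of occasionally
    spilling a key to the next backend clockwise."""

    def __init__(self, backends, epsilon=0.25):
        self.epsilon = epsilon
        self.ring = sorted((_h(b), b) for b in backends)
        self.hashes = [h for h, _ in self.ring]
        self.load = {b: 0 for b in backends}

    def _capacity(self) -> int:
        # Count the incoming request in the average so a slot always exists.
        avg = (sum(self.load.values()) + 1) / len(self.load)
        return math.ceil(avg * (1 + self.epsilon))

    def assign(self, key: str) -> str:
        """First backend clockwise from hash(key) that is under capacity."""
        cap = self._capacity()
        start = bisect.bisect(self.hashes, _h(key)) % len(self.ring)
        for i in range(len(self.ring)):
            backend = self.ring[(start + i) % len(self.ring)][1]
            if self.load[backend] < cap:
                self.load[backend] += 1
                return backend
        raise RuntimeError("unreachable: capacity always admits the request")
```

Session affinity comes from hashing a session key onto the ring; the bound keeps a hot key from overloading its "home" backend, which plain consistent hashing cannot guarantee.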

Speakers

Léo Unbekandt - Scalingo

How we have been building a container-based PaaS over the last 5 years

Entrepreneur and hacker, building new stuff with technology is what I love to do. CTO and co-founder of the PaaS provider Scalingo for 4 years, I’ve focused my efforts on building distributed infrastructures and hosting-related technologies, especially Linux containers. I like telling the world what it’s like to run such things in production and how they were built. I’ve been working on container orchestration and scheduling for a while, glad it’s something cool today :). Otherwise I like cooking and hiking, but that’s another story.

Damien Lespiau - Weaveworks

Kubernetes L7 Load Balancing Without a Service Mesh

Software Engineer - Weaveworks

Damien has spent way too much time playing with Linux over the past 20 years. He has worked on all sorts of embedded products, is a GNOME contributor, has spent years in the Linux kernel making Intel GPUs behave, and is now working at Weaveworks, helping define what the future of containers looks like.

Daniel Garnier-Moiroux - Pivotal Labs

Concourse: container-first continuous integration & delivery

After being a consultant and business analyst for a little while, Daniel came back to doing what he loves the most, software engineering. He has applied his skills in numerous business domains, such as e-commerce, railway planning or fire department operations management. He has a keen interest in automation, and more broadly in improving software engineering productivity.

He is now a software engineer at Pivotal Labs in Paris, where he practices and teaches extreme programming on a daily basis, TDD’ing and pairing like crazy.

Stéphane Teyssier - WeScale

An interactive shoot-'em-up of the orchestrators

Alexandre Beslic - Mantissa Labs

Improving on the reliability and operational complexity of container orchestration systems

Alexandre Beslic is a Software Engineer at Vente-Privée. He is currently working on topics such as High-Availability of distributed services and ensuring data storage replication and consistency. Additionally, he explores new ways to manage large clusters of machines at Mantissa Labs, a personal project he started. Previously he was responsible for orchestrating containers at Docker with the Swarm and Swarmkit projects where he notably worked on the distributed datastore that powers Docker Swarm mode. Alexandre has an MSc in Computer Science/Distributed Systems and Software from the Pierre and Marie Curie University (Paris VI Sorbonne).

Laurent Grangeau - Sogeti

Continuous Delivery of my containers to production

Laurent Grangeau is a Cloud Solution Architect at Sogeti with more than 10 years of experience. A former Java developer, he has since developed in .NET, with Agile and DevOps mindsets. He has been experimenting with cloud providers for more than 5 years. A Docker enthusiast from the beginning, he has experience building microservices and distributed systems. He loves to automate things and run distributed applications at scale.

Sébastien Lavallée - Meetic

Building an end-to-end testing strategy on top of Kubernetes in a world of microservices

Sébastien is a tech enthusiast. As a lead backend developer, he has worked on multiple architectures and languages, granting him the ability to be a chameleon of the code. At Meetic he is a Green Lantern of the developers, often looking out for good practices and the well-being of the code.

David Aronchick - Google

Cloud Native ML with Kubeflow

David Aronchick was the Senior Product Manager for the Google Container Engine and led product management on behalf of Google for Kubernetes. David has been helping to ship software for nearly 20 years, founding and being part of the management team for three different startups, as well as squeezing in time at Microsoft, Amazon, Chef, and now Google. David is co-founder of the Kubeflow project, an effort to help developers and enterprises deploy and use ML cloud-natively everywhere.

Ludovic Vielle - Jobteaser

JobTeaser, destination Kubernetes

Ludovic is a backend software engineer. After working for two years on the core application of JobTeaser, he left for adventure and dove into the ops side of Kubernetes. Now he's dreaming about operators, autoscaling and self-healing.

Alex Diaz - Sysdig

15 Kubernetes failure points you should watch

Alex is a Sales Engineer at Sysdig covering the EMEA region, focused on monitoring, alerting, troubleshooting and security solutions for containers and microservices. He’s been helping enterprises optimize and accelerate systems and applications for more than 10 years. Prior to joining Sysdig, Alex worked at Riverbed Technologies in multiple roles, including Escalation, Distribution and Sales Engineering, in San Francisco and Paris. Alex holds a Bachelor's in Electrical Engineering from Concordia University in Montreal, Canada.

Liz Rice - Aqua Security

Liz Rice is the Technology Evangelist with container security specialists Aqua Security, where she also works on container-related open source projects including kube-bench and manifesto. This year she is Co-Chair of the CNCF’s KubeCon + CloudNativeCon events taking place in Copenhagen, Shanghai and Seattle.

She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, and competing in virtual races on Zwift.

Anthony Seure - Algolia

Kubernetes: Should you use it for your next project?

I’m Anthony, and I’ve been working as a software engineer at Algolia for more than 2 years. With a team of 4 people, I work on and maintain the entire software / cloud-based stack which collects, stores and processes the logs of the entire Algolia search engine infrastructure (thousands of servers handling billions of events per day across 16 regions of the world). I work with Go, Kubernetes and GCP on a daily basis, and a bit of Scala. In my free time, I’m reading more and more about programming language theory, learning piano and watching a lot of movies.

Andrey Sibirev - Dropbox

Bedrock of Dropbox

Andrey Sibirev is an SRE at Dropbox in New York City. He mostly focuses on the Dropbox infrastructure platform used to manage the whole Dropbox server fleet, from hardware provisioning to package management to distribution to the runtime environment. As part of the Bedrock team, he’s responsible for making sure that Dropbox engineers are able to deliver new features and solve problems as fast as possible and in the most efficient way.

Jérôme Devoucoux - WeScale

Hashicorp Nomad - A lean microservices architecture in production

Christopher Cho - Google

Cloud Native ML with Kubeflow

Chris has been working to make enterprise software users' days better through technology consulting, cloud migration, and now machine learning projects. Chris spends his time as a product manager on Kubeflow and as the AI practice lead for the Google Cloud EMEA region, helping enterprise users take advantage of the best ML platform on the market.

Ihor Dvoretskyi - Cloud Native Computing Foundation

The Cloud Native Way

Ihor Dvoretskyi is a Developer Advocate at the Cloud Native Computing Foundation, focused on Kubernetes-related efforts in the open source community. He is a co-founder and co-lead of the Kubernetes Product Management Special Interest Group (SIG-PM), focused on enhancing Kubernetes as an open source product. Besides that, he participates in the Kubernetes release process, having served as features lead for multiple Kubernetes releases. With a deep engineering background, Ihor has been responsible for projects tightly bound to the cloud computing space, containerized workloads and Linux systems.

Baptiste Assmann - HAProxy Technologies

Enforcing (micro) service response time on Kubernetes with HAProxy and Prometheus

Baptiste Assmann is a Principal Solutions Architect at HAProxy Technologies.
An HAProxy expert, he has also been involved in some major bugs and features in HAProxy: the runtime DNS resolver, Send/Expect checks, and more.
For around 15 years, he has been involved in high-performance web architectures for different kinds of usage: web hosting, CDNs and, finally, software load balancers / reverse proxies!

Daniel Maher - Datadog

Monitoring Containers: Follow the Data

Dan is a long-time system administrator: he first installed Linux on his home PC in 1995 and never looked back. A veteran of the original dotcom bubble, he founded a web hosting company in the late 90s, and has since worked in a variety of environments from start-ups to global corporations, including stints as a university lecturer and a day labourer. Today, Dan is a Technical Evangelist at Datadog, a role that allows him to satisfy his obsession with classifying and measuring things in general.

David Gageot - Google Cloud

Ric Harvey - AWS

Practical AWS Fargate

AWS technical developer evangelist, with a passion for containers and CI/CD. Live demos and fewer slides are my mission.

Sébastien Le Gall - Meetic

Building an end-to-end testing strategy on top of Kubernetes in a world of microservices

As a backend tech lead, Sébastien works on the Meetic microservices development in PHP, on the event bus stack in Scala, and on automation for the development environments and the CI/CD chain. Sébastien also uses Go a lot when it comes to building tooling that makes developers' lives better.

Thomas Auffredou - Xebia

JobTeaser, destination Kubernetes

Thomas is an unrestricted tech consultant at Xebia. Acting as an SRE at JobTeaser, he works on building shiny and resilient systems and providing tools to his fellow dev teams.

Alexis ``Horgix`` Chotard - Xebia

Software & Systems Engineer

Alexis ``Horgix`` Chotard is a French systems and software engineer currently working at Xebia (https://xebia.fr/).

With a software engineering background, and experience more inclined toward systems and infrastructure, he naturally finds himself at home around ``DevOps`` topics. Alexis ``Horgix`` is eager to automate everything he can and currently loves working on various topics such as continuous integration and deployment, or the design of dynamic architectures and the integration of applications into them. He's happy to face new challenges with the rise in popularity of containers, and to look for evolutions of traditional workflows. In his spare time, he also maintains a bunch of Arch Linux packages, contributes to open source projects and plays with Cloud Native solutions so as to be able to use them in real client projects.