Agenda

DockerCon attendees can now build their schedules in the DockerCon Agenda Builder. Check out the schedule, filter sessions by your interests, experience level, and job role, and, if you have a DockerCon registration, get recommendations based on your profile and marked interests.

Tracks

Using Docker sessions are introductory sessions for Docker users, dev and ops alike. Filled with practical advice, learnings, and insight, these sessions will help you get started with Docker or better implement Docker into your workflow.

Docker Best Practices sessions provide a deeper dive into Docker tooling, implementation, and real world production use recommendations. If you are ready to get to the next level with your Docker usage, join this track for best practices from the Docker team.

Use case sessions highlight how companies are using Docker to modernize their infrastructure and build, ship and run distributed applications. These sessions are heavy on business value, ROI and production implementation advice, and learnings.

One way to achieve a deep understanding of a complex system is to isolate the various components of that system, as well as those that interact with it, and examine all of them relentlessly. This is what we do in the Black Belt track! It features deeply technical talks covering not only container technology but also related projects.

The Edge Track shows how containers are redefining our technology toolbox, from solving old problems in a new way to pushing the boundaries of what we can accomplish with software. Sessions in this track provide a glimpse into the new container frontier.

The Transform track focuses on the impact of change - both for organizations and for ourselves as individuals and communities. Filled with inspiration, insights, and new perspectives, these stories will leave you energized and equipped to drive innovation.

The Ecosystem track showcases work done by sponsoring partners at DockerCon. Ecosystem sessions cover a diverse range of topics and offer the opportunity to learn more about the variety of solutions available in the Docker ecosystem.

Sessions

Learning Docker From Square One

Being a newer technology, Docker has yet to make its way into some computer science training programs. College programs, bootcamps, and online resources have yet to jump onto the container train, so what's the best way for newer engineers to learn Docker from square one? Chloe (former actress turned developer) tells her story about how she went from wondering "What's a Docker?" to helping teach others about Docker and instead asking "What?? You haven't heard of Docker?". This talk is perfect for anyone new to Docker wondering how to get started, or for those interested in learning how to teach Docker to new users.

Docker?!?! But I'm a SysAdmin

Your developers just walked into your cube and said: "Here's the new app, I built it with Docker, and it's ready to go live." What do you do next? In this session, we'll talk about what containers are and what they are not. And we'll step through a series of considerations that need to be examined when deploying containerized workloads - VMs or containers? Bare metal or cloud? What about capacity planning? Security? Disaster recovery? How do I even get started?

Creating Effective Images

Sick of getting paged at 2am and wondering "where did all my disk space go?" This has actually happened to me, and you can learn from my mistakes! New Docker users often start with a stock image in order to get up and running quickly, but that isn't always the right answer. Creating efficient images is often overlooked, but important. Beyond saving resources, using minimal images also delivers important security benefits: include only what you need, not a whole runtime that might have security vulnerabilities. In this session, I'll talk about how to create effective images and share lessons I've learned from running containers in production at a number of startups. I'll also cover topics like "how do layers work?" and some things you should think about when creating your images, such as choosing or creating the right base image, the importance of caching, using RUN statements conservatively, and cleaning up as you go. I'll also address best practices, both at a high level with multi-stage builds and with some language-specific examples, such as tips and tricks for creating containers for Node.js vs. Go. To illustrate these points, we'll cover: * How layers work * Choosing a base image vs. creating your own * The basics of building minimal images and the importance of caching * High-level best practices for Linux containers (in general, plus some language-specific examples) * High-level best practices for Windows container images * New and improved: multi-stage builds * Good vs. not-so-good Dockerfile examples * Docker Image Scanning, and other friends * What's up next? Looking to the future for more optimization.
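The multi-stage pattern mentioned above can be sketched in a few lines. A hypothetical Dockerfile for a small Go service (image tags, paths, and the app itself are illustrative, not from the session):

```dockerfile
# Stage 1: build in a full Go toolchain image
FROM golang:1.9 AS build
WORKDIR /go/src/app
COPY . .
# Build a static binary so it can run on a bare base image
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: copy only the binary into a minimal runtime image
FROM alpine:3.6
RUN apk add --no-cache ca-certificates
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image contains the binary and little else, keeping both size and attack surface small.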

Modernizing .NET Apps

Docker has the potential to revolutionize how we build, deliver, support and even design software. But it doesn't have to be a violent revolution. The end goal might be breaking your existing ASP.NET monolith into microservices which run cross-platform on .NET Core, but the first step can be as simple as packaging your whole .NET Framework application as-is into a Docker image and running it as a container.

In this session, we'll take an existing ASP.NET WebForms application and package it as a Docker image, which can run in a container on Windows Server 2016 and Windows 10. We'll show you how to run the app and a SQL Server database in Docker containers on Windows, and how to use Docker Compose to define the structure of a distributed application.
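A Docker Compose file for that kind of two-tier Windows setup might look roughly like this (image and service names are illustrative; the WebForms image is assumed to have been built separately):

```yaml
version: '3.3'

services:
  db:
    # SQL Server Express running in a Windows container
    image: microsoft/mssql-server-windows-express
    environment:
      - ACCEPT_EULA=Y
      - sa_password=StrongPassw0rd!

  web:
    # The packaged ASP.NET WebForms app, built from its own Dockerfile
    image: my-org/webforms-app
    ports:
      - "80:80"
    depends_on:
      - db
```

`docker-compose up -d` then starts the database and the web app together as one distributed application.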

Then we'll iteratively add functionality to the app, making use of the Docker platform to modernize the monolith without a full rebuild. We'll take a feature-driven approach and show you how Docker makes it easy to address performance, usability and design issues.

Tips and Tricks of the Docker Captains

Docker Captain Adrian Mouat will present a grab bag of tips and tricks for getting the most out of Docker. These tips are aimed at avoiding common pitfalls, addressing common misunderstandings, and making common operations easier. Topics covered will include: - Build Processes - Security - Volumes - Databases - Debugging and Maintenance - Calling Docker from Docker Whilst aimed primarily at new and intermediate users, even advanced users should pick up some new information. This talk will make your daily life with Docker easier!
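A few tips in this spirit, sketched as CLI one-liners (container and image names are placeholders, not from the talk):

```shell
# Maintenance: reclaim disk from unused containers, images, and networks
docker system prune

# Avoid litter: remove the container automatically when it exits
docker run --rm alpine echo "hello"

# Debugging: see what a running container is actually doing
docker top mycontainer
docker logs --tail 50 -f mycontainer

# "Docker from Docker": mount the host's socket instead of nesting daemons
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker docker ps
```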

Building a Secure Supply Chain with Docker

Creating a secure supply chain of images is vitally important. Every organization needs to weigh all the options available and understand the security risks. With so many options for images, it is tough to pick the right ones or even to create your own. Ultimately, every organization needs to know the provenance of all its images. Then, once the images are imported into the infrastructure, a vulnerability scan is vital. Docker Trusted Registry with Image Scanning will give organizations insight into any vulnerabilities. Better yet, it's automated, with a succinct audit trail, so you can still take that vacation you had planned and make your security team happy.
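Provenance checking of the kind described here can be switched on with Docker Content Trust; a minimal sketch (registry and repository names are illustrative):

```shell
# With content trust enabled, pushes are signed and pulls verify signatures
export DOCKER_CONTENT_TRUST=1

# The push signs the tag with Notary-managed keys
docker push dtr.example.com/team/app:1.0

# A pull of an unsigned or tampered tag now fails instead of succeeding
docker pull dtr.example.com/team/app:1.0
```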

Practical Design Patterns in Docker Networking

Dan Finneran, Docker

Migrating an application to Docker creates an opportunity to utilize new networking topologies and features, which can provide new functionality to an existing application. This talk will provide an overview of Docker networking with a focus on the architectural choices when migrating applications. Taking sample applications we will look at the existing networking topology and cover the options available to create a simple migration and provide additional functionality.

The Road to Docker Production: What You Need To Know and Decide

DevOps in the Real World is far from perfect, yet we all dream of that amazing auto-healing fully-automated CI/CD micro-service infrastructure that we'll have "someday." But until then, how can you really start using containers today, and what decisions do you need to make to get there? This session is designed for practitioners who are looking for ways to get started now with Docker and Swarm in production. This is not a Docker 101, but rather it's to help you be successful on your way to Dockerizing your production systems. Attendees will get tactics, example configs, real working infrastructure designs, and see the (sometimes messy) internals of Docker in production today.

Alpla is a global manufacturer of plastic bottles, used for product packaging by Unilever, L’Oreal, Coca-Cola, and many other popular consumer brands. In this session you will learn how they are using real-time data, generated by hundreds of sensors on each of their production lines across 170 factories, to increase equipment efficiency. We will explain the stack from sensors to database (Kafka & CrateDB) to dashboard (Grafana), and how it is being scaled 24x7 on Docker Enterprise.


What's New in Docker


It’s the first breakout after the keynote and you need to know more about all the latest and greatest Docker announcements. We've got you covered! In this session, the Docker team will go deeper, looking into what's new with Docker, demoing the latest features and answering your questions.

Docker Multi-arch All The Things

In this talk, Phil and Michael will talk about how Docker was extended from x86 Linux to Windows, ARM, and IBM’s z Systems mainframe and Power platforms. They will cover the work and architecture that make it possible to run Docker on different CPU architectures and operating systems: how porting Docker to a new OS differs from porting it to new hardware, what it means for a Docker image to be multi-arch (and how multi-arch images are built and maintained), and how Docker correctly deploys and schedules apps on heterogeneous swarms.
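One way multi-arch images are assembled is with manifest lists; a sketch using the (at the time experimental) `docker manifest` subcommands, with illustrative image names:

```shell
# Combine per-architecture images into a single multi-arch tag
docker manifest create myorg/app:1.0 \
    myorg/app:1.0-amd64 \
    myorg/app:1.0-arm64 \
    myorg/app:1.0-windows-amd64

# Record the platform a constituent image targets
docker manifest annotate myorg/app:1.0 myorg/app:1.0-arm64 \
    --os linux --arch arm64

# Push the manifest list; clients then pull the right image automatically
docker manifest push myorg/app:1.0
```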

Phil and Michael will also demo some of the new features that let Docker Enterprise Edition manage swarms with both x86 Linux and Windows nodes as well as mainframes.

The Enterprise IT Checklist for Docker Operations

Enterprises often have hundreds of legacy applications developed by development teams across multiple business units. This presents a series of challenges to IT teams as they architect and support a complex and diverse IT environment. Add to that Docker, containers, and the cloud: going beyond the pilot environment to production requires both the right technology and the right practices. In this session, we will go through a checklist of considerations and best practices, providing a framework for smooth Docker production operations.

Using Docker to Secure Traditional Applications without Code Changes

Diogo Mónica, Docker

Legacy applications often serve critical business needs and have to be maintained for a long time. Some applications may have been written decades ago, grown to millions of lines of code and the team that built and deployed the app may no longer be at your company. This fact poses a particularly challenging problem for the security and availability of these applications.

In this talk, we will focus on securing traditional applications using Docker, and showcase how modernizing these apps by moving them into containers not only makes them portable and cost-efficient but also allows you to run legacy applications more securely, without having to make code changes. We will review the security features of Docker Enterprise Edition, including isolation, encryption, scanning, signing, and more, to show how you can reduce the attack surface of legacy apps and limit the impact of any issues. Live demonstrations will show how to use these features in different security configurations and how to respond and react to incoming threats.

Advanced Access Control with Docker EE

The sharing of computing resources among applications and users solves many challenges and presents opportunities for enterprise IT. It leads to better infrastructure efficiency and the specialization of responsibilities in the IT stack. Shared resources across diverse organizations and applications also introduce new hurdles. Tenants need to access their resources securely and with complete privacy from other tenants. This requires secure segmentation, access control, and more.

Container multi-tenancy is much more than cgroups and namespaces. This talk focuses on the advanced Access Control features in Docker Enterprise Edition that provide the fine-grained control to segment cluster resources. This includes how to design fine-grained roles, the architecture and grouping of resources, and how to apply these as Access Control policy. Walk through practical examples from current production designs and understand how they can be applied to your organization.

Docker Enterprise Edition Deep Dive

Docker Enterprise Edition (EE) is a secure, scalable, and supported platform for building and orchestrating applications across multi-tenant Linux and Windows environments. Join Docker product managers as they walk through how Docker EE addresses challenges faced by enterprise customers, as well as the technical architecture of the solution. They will also deep dive into demos for the latest and upcoming features around application runtime and image management.

Modernizing Traditional Applications: From PoC to Production

A proof of concept is a great way to see if your traditional applications are worth Dockerizing. However, getting that first application into production in an enterprise can pose many challenges, both technical and organizational. In this talk, I will take you through the journey, starting with high-level decisions such as which applications and components to Dockerize and what methodology to follow, then moving on to more detailed decisions such as what components to put in images, configuration management, and version control. I will also cover how this impacts the development pipeline and strategies for operationalizing and scaling out the application onboarding process.

Troubleshooting Tips from a Docker Support Engineer

Ada Mancini, Docker

Docker makes everything easier. But even with the easiest platforms, sometimes you run into problems. In this session, you'll learn first-hand from someone whose job is helping customers fix these problems, and see how Docker Swarm and Universal Control Plane can help you keep your apps running smoothly with minimal downtime.

Docker ON Docker

At Docker, we like to “eat our own dog food” or “drink our own champagne.” Whatever your favorite phrase, the importance of a software company using their own software is critical to relating to our customers. In this talk, we will discuss how the Docker Infrastructure and engineering teams have deployed and operationalized Docker Enterprise Edition (EE) for our staging and production environments, what we have learned in the process, and how it's making Docker EE better.

The Container Evolution of a Global Fortune 500 Company

Jeff Murr, MetLife

In our new digital economy, keeping up can feel like a never-ending expansion of costly technical overhead. Each “trend” adds net-new operational and capital expenses to seemingly bloated run-rate measures - already challenged by leadership. Containers may feel like just another one of these trends, bringing their own additional expense. At MetLife, however, we sought to make containerization self-funding, allowing us to fuel change and tap into innovation at large scale. To do this, MetLife’s ModSquad challenged established norms to prove that containers work, all the way through to production. Then, we asked Docker for help to modernize our traditional landscape to create funding sources to adopt containers, change holistically, and reduce overhead to our bottom line.

This talk picks up where the MetLife story presented at the Austin DockerCon ends: what happens after you’ve done one thing well and you need to expand the revolution? We'll discuss how MetLife leveraged the Modernize Traditional Apps (MTA) program. We’ll discuss planning, preparation, execution, and our post-mortem learnings, in addition to technical obstacles, mindsets, roles, addressing executive concerns, and training. I’ll share how we created regional business cases and roadmaps to create a funding pipeline by technology. Finally, we’ll look at our new forecast and, ultimately, our new future.

Building a Secure and Resilient Foundation for Banking at Intesa Sanpaolo

Intesa Sanpaolo is one of the leading banking groups in the Eurozone, with over 12 million customers and 4,600 branches in Italy. With a lot of traditional monolithic applications that are difficult to maintain and evolve, Intesa turned to Docker to help them both modernize the applications and improve their portability so that they could consider a multi-site architecture across multiple data centers. Using Docker Enterprise Edition (EE), Intesa took the first step to “break the monolith” by containerizing their infrastructure, self-described as an “infrastructure-as-code” pattern, and now use Docker EE to orchestrate the applications across sites.

In this talk, Diego Braga, Infrastructure System Specialist at Intesa, and Lorenzo Fontana, DevOps Engineer at Kiratech, will share how they implemented Docker EE along with software-defined networking and storage solutions to validate Intesa’s architectural model and to build a geographically distributed multi-data-center cluster, all while saving infrastructure costs and remaining compliant with regulations.

They will highlight their CI/CD process using Docker and Jenkins, how the developer and ops teams are now working together to implement a DevOps methodology, and Intesa’s ROI in using Docker EE. They will also share Intesa’s future plans, including creating mixed Linux/Windows clusters that use the same overlay network, and opportunities for on-prem/public cloud clusters.

Shipping and Shifting ~100 Apps with Docker

Alm. Brand has been successfully running greenfield Dockerized workloads in production for nearly two years. However, enterprises are known for their very long-lived and ill-maintained monoliths which are not easily rewritten or relocated, and we have our fair share of those. Focusing on freeing up precious ops time, Alm. Brand ventured to transform all legacy WebLogic apps to run in Docker. The move has provided a golden opportunity to restructure our platform, and has helped push the DevOps agenda in what is probably the oldest company yet to present at DockerCon (founded 1792). Through an awesome live demo, we will demonstrate: * as much as we can of our entire working production setup, boiled down to a Swarm stack file; * how we are able to convert and deploy applications during office hours, unbeknownst to the end users; * how to smoothly and transparently handle the transition of users to the Dockerized environment; * how we have streamlined monitoring, logging, and deployment across greenfield and legacy apps

How Docker Helps Open Doors at Assa Abloy

Over the past 20 years, Assa Abloy has transformed from a mechanical lock producer to the global leader in door-opening solutions. Today, Assa Abloy is at the forefront of innovation when it comes to digital access solutions. During this talk, we will discuss how Assa Abloy is using Docker EE to build a Common Access Technology platform based on microservices running in containers. We will share the architectural decisions that were made and how those resulted in deploying Docker EE on AWS. We will discuss both the technical challenges Assa Abloy encountered and the organizational changes that affected the way they develop their software. Next, we will share how Assa Abloy plans to roll out on a global scale.

Back to the Future: Containerize Legacy Applications

People typically think of Docker for microservices and try to make the smallest container they can. There are tremendous benefits to a microservices model but those are not the only apps that qualify for containers. Traditional, homegrown, monolithic apps are also great candidates for Docker - why? By containerizing these apps, many of the same agility, portability, security and cost savings benefits can be applied to the hundreds (if not thousands) of apps in your datacenters. But where to begin? Attend this session to learn how to approach modernizing traditional apps (MTA), considerations, the available tools and possibilities.

How Docker is Finnish Railway’s Ticket to App Modernization

VR Group-Finnish Railways is responsible for 118 million passenger rides and moving 41 million tons of cargo a year, and is seeing overall growth in rail transit throughout Finland. A priority for the organization is to provide improved customer services, including an improved seat reservation system and bringing modern experiences like next-generation mobile apps to their passengers. These improvements require looking at their application portfolio and deciding to either:

In this session, Markus Niskanen, Integration Manager at VR Group, and Oscar Renalias, Sr. Technology Architect at Accenture, will discuss how they leveraged Docker EE and the public cloud as the common platform for these different application modernization projects. They will cover how they are leveraging Docker and the cloud to renew and optimize their application portfolio for greater ROI, leading to organization-wide adoption of DevOps principles and cultural change in an industry that is over 150 years old.

Using Docker to Scale Operational Intelligence at Splunk

With more than 14,000 customers in 110+ countries, Splunk is the market leader in analyzing machine data to deliver operational intelligence for security, IT, and the business. Our rapid growth as a company meant that our Infrastructure Engineering team, responsible for all the common tooling, build and test systems, and frameworks utilized by Splunk engineers, was bogged down with a sprawl of virtual machines and physical servers that were becoming incredibly difficult to manage. And as our customers’ demand for data has grown, testing at the scale of petabytes/day has become our new normal. We needed a reliable and scalable “Test Lab” for functional and performance testing.

With Docker Enterprise Edition, our engineers are able to create small test stacks on their laptop just as easily as creating multi-petabyte stacks in our Test Lab. Support for Windows, Role Based Access Control and having support for both the orchestration platform and the container engine were key in deciding to go with Docker over other solutions.

In this talk, we will cover the architecture, tooling, and frameworks we built to manage our workloads, which have grown to run on over 600 bare-metal servers, with tens of thousands of containers being created every day. We will share the lessons learned from running at scale. Lastly, we will demonstrate how we use Splunk to monitor and manage Docker Enterprise Edition.

Société Générale knows that containers and the cloud are the future of the IT industry and has been using Docker EE for over a year and a half. In this talk, we will share how Docker EE fits into our global strategy and our architecture for integrating the platform with our existing IT systems. We will go over the tradeoffs of how we operationalized the platform to provide a highly available CaaS (Containers-as-a-Service) to our global enterprise. Finally, we will share how we are onboarding development teams and deploying their applications to production.

Capgemini: Dutch Kadaster Business Results and Technical Benefits of Becoming Agile and Putting Docker on the Map

Rick Peters, Capgemini

What Have Syscalls Done for You Lately?

If you've ever written any code - even just Hello World - you've used some syscalls. In this talk, we'll explore what syscalls are, how they are used to set up containers, and how to make your deployment more secure at runtime by limiting the syscalls your containers can make thanks to seccomp and Linux security modules like AppArmor.
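Applying these restrictions at run time is a one-flag change; a sketch (the profile path and AppArmor profile name are hypothetical):

```shell
# Run with a custom seccomp profile instead of Docker's default allowlist
docker run --security-opt seccomp=/path/to/profile.json alpine sh

# Disable seccomp filtering entirely (useful for comparison, not production)
docker run --security-opt seccomp=unconfined alpine sh

# Apply an AppArmor profile that has been loaded on the host
docker run --security-opt apparmor=my-profile alpine sh
```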

We'll also discuss how, if your architecture is broken into containerized microservices, this gives you a great opportunity to improve security by limiting what each container can do. This is where containerized microservices really shine over traditional monoliths from a security perspective - so it's helpful to know about if you're trying to convince your security team that containers are a good idea.

There will be lots of live demos!

LinuxKit Deep Dive

We open-sourced LinuxKit in April 2017 at DockerCon in Austin. In this session, we'll take a detailed look at some advanced LinuxKit topics, including the general read-only filesystem setup, multi-arch image support for x86_64 and arm64, custom network configuration, and kernel debugging and testing.

Container-relevant Upstream Kernel Developments

There is a lot of work going on in upstream Linux by a number of different entities focused on making containers more featureful. For example: namespaced file capabilities, LSM stacking, namespaced integrity management, user-id shifting filesystems, and perhaps even a `struct container` definition in the kernel proper. In this talk, I'll cover several of these container-relevant patchsets that have been proposed for the kernel, motivating why they are interesting and discussing where the patchsets need to go before being merged to mainline.

The Truth Behind Serverless

Erica Windisch, IOpipe

We'll look at how to architect and build a serverless platform and what makes something "serverless". We will dive into the design patterns for serverless applications and how container management solutions must be architected around user requirements.

We will dive deep into how existing cloud-based serverless platforms leverage containers, how they're scheduled, managed, and sandboxed. We'll also look at what improvements we might expect or desire of new and existing serverless platforms.

How and Why Prometheus' New Storage Engine Pushes the Limits of Time Series Databases

Goutham Veeramachaneni, Student, IIT Hyderabad

The Prometheus monitoring system collects and stores time series data to give valuable insights over hosts, containers, and applications. Its storage engine was designed to be multiple orders of magnitude faster and more space efficient than, say, RRD or SQL storage. However, with the rise of orchestration systems such as Docker Swarm and Kubernetes, and their extensive use of techniques like rolling updates and auto-scaling, environments are becoming increasingly dynamic. This increases the strain on metrics collection systems. To deal with these challenges, a new storage engine has been developed from scratch, bringing a sharp increase in performance and enabling new features.

This talk will describe this new storage engine, its architecture, its data structures, and explain why and how it is well suited to gracefully handle high turnover rates of monitoring targets and provide consistent query performance.

We introduced Cilium at DockerCon US 2017. Cilium provides application-aware network connectivity, security, and load-balancing for containers. This talk follows up on that introduction and deep dives into recent kernel developments that address two fundamental questions: How can I provide application-aware security and routing efficiently, without overhead embedded into every service? How can container hosts protect themselves from internal and external DDoS attacks? The solutions include:

kproxy: a kernel-based socket proxy which allows for application-aware routing and security enforcement with minimal overhead.

XDP: A lightning-fast packet processing datapath using BPF. The technology is intended for DDoS mitigation, load-balancing, and forwarding.

This talk will deep dive into these exciting technologies and show how Cilium makes BPF and these kernel features available on Linux for your Docker containers.

Deeper Dive in Docker Overlay Networks

The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink, and a distributed key-value store. This talk will present each of these mechanisms one by one along with their userland tools and show hands-on how they interact together when setting up an overlay to connect containers. The talk will continue with a demo showing how to build your own simple overlay using these technologies. Finally, it will show how we can dynamically distribute IP and MAC information to every host in the overlay.
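The building blocks can be sketched with plain iproute2 commands; a hypothetical two-host setup (interface names, addresses, and MACs are illustrative, and the commands require root):

```shell
# A namespace standing in for a container, plus a VXLAN interface
# keyed by the same VNI (42) on both hosts
ip netns add demo
ip link add vxlan42 type vxlan id 42 dstport 4789 dev eth0
ip link set vxlan42 netns demo
ip netns exec demo ip addr add 192.168.0.1/24 dev vxlan42
ip netns exec demo ip link set vxlan42 up

# Distribute reachability info by hand, as the key-value store would:
# forwarding entries (MAC -> remote host) and neighbor entries (IP -> MAC)
ip netns exec demo bridge fdb append 02:42:c0:a8:00:02 dev vxlan42 dst 10.0.0.2
ip netns exec demo ip neighbor add 192.168.0.2 lladdr 02:42:c0:a8:00:02 dev vxlan42
```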

Container Orchestration from Theory to Practice

Laura Frank, Codeship | Stephen Day, Docker

Join Laura Frank and Stephen Day as they explain and examine technical concepts behind container orchestration systems, like distributed consensus, object models, and node topology. These concepts build the foundation of every modern orchestration system, and each technical explanation will be illustrated using Docker’s SwarmKit as a real-world example. Gain a deeper understanding of how orchestration systems like SwarmKit work in practice and walk away with more insights into your production applications.

Gordon's Secret Session


What's this session about? Only Gordon knows.

Rock Stars, Builders, and Janitors: You're Doing it Wrong

You know these roles: the rock star, who is always rolling out a new demo or installing a new technology in your stack; the builder, who makes it reliable and makes it scale; the janitor, who cleans up all your messes, writes your docs, and tweaks your configs. Grow an engineering team to a certain size, and these roles reveal themselves and cement themselves into your processes.

You come to rely on these roles and the people who fill them. And that’s bad.

Yes, rock stars get the spotlight, while builders toil away in the background, and janitors are forgotten. But it’s not all about glory. Pigeonholing engineers hurts everyone and can slow down your engineering organization in the long run. If you’re only a rock star, you’ll never understand scale or user experience. If you’re only a builder, you’ll never learn to write clean configs or care about future use cases. If you’re only a janitor, you’ll never appreciate change or technical growth. You need to be all three to succeed.

Learn Fast, Fail Fast, Deliver Fast: The ModSquad Way

The introduction of microservices and containers presents challenges to the organization that go beyond implementation and operation. These are inherently disruptive technologies, and a risk-averse enterprise can struggle as the business culture adapts to change. This is likely the case at most companies, and perhaps amplified at large, highly regulated, and technically siloed organizations. It’s often in the best interests of the business to be averse to disruptive change, even when the benefits are well understood. New ways of doing things force the business to look at old problems through a fresh lens. A focus on success often shields people from the lessons learned from failure. All of these things can slow, or even kill, a project.

At MetLife we tackled change and disruption with a highly focused and nimble innovation team that is empowered to push the envelope, break the rules, and challenge established norms. The good news is that it's working! Along the way we have learned valuable lessons that enable the success of the team, the disruptive technologies, and, ultimately, the business. This talk builds on the MetLife story presented at the Austin DockerCon by focusing on the “enabling” nature of our innovation team – we call it The ModSquad – that rapidly implemented Docker and our first production microservices-based application.

This talk will dive into what worked, and what didn’t work. We’ll talk about executive support and recognition, empowering people, and encouraging a fail-fast mentality. We’ll explore the boundary conditions that we learned along the way that enhance the success of the team, the project, and the business. We’ll dig into how we have grown and evolved the team based on both our successes and failures, and the pitfalls we would have liked to avoid. Finally we’ll take a look at what we think will be the future state of the team, and some of the disruptive technologies we may tackle on the horizon.

A Story of Cultural Change: PayPal's 2 Year Journey to 150,000 Containers

Meghdoot Bhattacharya, PayPal |

Adopting containers at scale is fundamentally a cultural change. In late 2015, PayPal decided to migrate en masse to containers for applications built on many different frameworks over the last 15 years. It was a bold and strategic plan that included how to showcase the value of containers to leadership, a phased execution strategy, building the right team to lead, and cultural transformation. Changing application code, deployment methods, and operational tools was, from the onset, non-negotiable. This session will share how the plan was pitched and the learnings that unfolded as PayPal carefully changed everything - and nothing at the same time - to get to 150,000 containers running in production in 2 years.

Letting Science Drive Technology at GlaxoSmithKline

GlaxoSmithKline is a global science-led healthcare company whose mission is to help people do more, feel better, and live longer. In support of this mission, GSK not only discovers, develops, and makes medicines that treat a broad range of the world's most common acute and chronic diseases, but must also optimize this process to make them as quickly and widely accessible as possible.

But in practice, innovation and process optimization don’t come naturally. Our scientists will often download and deploy niche open source software to solve key scientific problems, resulting in the adoption and abandonment of different software components. Additionally, to support the needs of our R&D organization by embedding analytics within the business, we have several Hadoop clusters with different hardware genotypes, each with its own respective entry points. So many entry points can become a nightmare to manage, let alone customize based on the needs of the end users.

In this session, Ranjith Raghunath, Director, Big Data Solutions, Data Centre of Excellence and Lindsay Edwards, Head, Respiratory Data Sciences Group, will share how GSK is using Docker to deliver ‘edge nodes on demand,’ decoupling a gateway from a specific cluster, allowing users to move from one cluster to another while keeping everything they have intact, and reducing deployment of a production grade software package from 6 months to a few hours.

My Journey To Go

Before becoming a Gopher, Ashley was a professional photographer, which explains her talent and dedication to creating unique Gopher artwork. However, she found that photography wasn’t paying the bills and with a family to feed, she turned to programming. Prior to enrolling at Hackbright Academy (a software engineering bootcamp for women), Ashley had done some front-end work building websites for photographers and working on SEO. Bootcamp had promised to teach her all of the things you need to know to be an engineer, but despite having the “official” business card 12 weeks later, she didn’t feel like a Software Engineer. Ashley found herself writing Python and working as a Community Manager at Rackspace, which is when she met a Gopher who would completely change her trajectory.

In 2014, Ashley met and became quick friends with Steve Francia @spf13, who made her abandon Python and never look back. Steve had been teaching an introductory course to Go, and urged Ashley to help teach the course, where her fresh perspective on Go was an asset. Ever since then, Ashley has been a valued member and a key contributor in building the Go community.

Join this session to learn how teaching others (even though Ashley was new herself) got her involved in an awesome open source community and ultimately changed her career path. Ashley will share how she contributed without committing code, and her lessons and tips on how anyone can get involved in OSS communities and make an impact.

A Strong Belief, Loosely Held: Bringing Empathy to IT

In this talk, the conversation centers on how to use behavioral economics and other processes to help IT organizations adopt DevOps practices. Technology is easy, but people are hard. How can we use game theory to encourage empathy in an organization? How can you, as an individual contributor, help drive positive change in your team, company, and community? This talk fosters thought and dialogue on how to address the people and IT cultural needs as organizations transform.

The Value of Diverse Experiences

Some things in life are within your control, while others are not. And if you belong to an underrepresented demographic, this translates into additional challenges when it comes to having the career you want. Finding a mentor, participating in resource groups, and building your network are all great ways to help you climb the corporate ladder (or more accurately, the jungle gym). For me, refusing to put invisible guardrails around myself led me down an interesting path and allowed me to take charge of my career.

In this session, I’ll share personal anecdotes about my journey from a hardware engineer working on 64 nm chips, to software engineer working on large scale distributed systems, to a manager, where I have the opportunity to make a greater impact by building a strong and diverse team. I’ll cover my learnings, and give attendees tips on how to define what a successful career is for you, and keys to building a diverse team.

Becoming the Docker Champion: Bringing Docker Back to Work

Jim Armstrong, Docker |

You’re at DockerCon and have spent the last two days deep in sessions, the Hallway Track, and networking. You’ve heard the stories, learnings and benefits from large and small organizations that are on their devops and app modernization journey with Docker. You may have even begun to identify multiple use cases for Docker at your work and how it could benefit your business and other teams.

In this session, Jim Armstrong of Docker will share how other Docker users have built their cases for broader use of Docker in their organizations. He will share real experiences of developers convincing their ops teams, ops teams introducing Docker to their developers, and passionate Docker users convincing IT executives to adopt Docker.

We Need to Talk: How Communication Helps Code

Building a successful open source project requires more than just code. As Docker and many other household-name projects show, communication is also an essential ingredient in growing a project to greatness. This introvert-friendly talk will help you level up your development game by highlighting three tools and techniques: user research, documentation, and InnerSource. First, I'll help you apply some basic user research practices to refine your project purpose, vision, and value proposition. Then I'll talk about the role of documentation and effective storytelling in generating interest and feedback from broad development audiences. Next, I'll move on to InnerSource: what it is, how it works, and how it can improve your team's communication and collaboration habits. For this, I'll share real-world examples (including some from Zalando) of how InnerSource enabled teams to develop more effectively and efficiently. Finally, I’ll offer some examples of open-source projects (including Docker) that demonstrate how great communication leads to great software. Ideally, you’ll come away inspired to integrate more communication into your development processes.

Take Control of Your Maps with Docker

Maps are an essential part of many online tools, user interfaces, and products. Third-party services such as the Google Maps API are often used, but thanks to Docker and OpenStreetMap this is now changing. The open-source OpenMapTiles.org project revolutionises how easy it is to deploy world maps on any infrastructure of your choice running Docker. Learn how to launch your own map service with containers, how to turn raw OpenStreetMap data into tiles, how to adjust the look & feel and language of the maps, and how to scale a production deployment horizontally with Docker Swarm. The maps are powered by open-source software and open data, without vendor lock-in, and are directly usable in web products, mobile applications, and online services. The OpenMapTiles.org project launched in early 2017 and has already been adopted by Siemens, IBM, GeoCaching, Amazon, Bosch, Planet Labs, and others.

Skynet vs Planet of The Apes, Duel!

Two self-managed Docker clusters are deployed on public clouds and fight each other in a ruthless battle. One has been designed to resist any form of threat. The other one's only aim is to destroy the first. Who's going to win?

Through this fantasy, we'll first cover the technologies concretely used to set up the platforms and run the battle (LinuxKit, InfraKit, swarm mode, and even Raspberry Pi devices, among others). In the second part we'll step back to address the underlying architectural stakes: reliability, scalability, edge computing, immutability, microservices, hybridization, and distributed storage will hold no secrets for you! Most of all, you'll understand the importance of the synergy between the platform's design and the app's design in achieving such a result.

How to Secure the Journey to Big Data Microservices - Fraud Management at Arvato GmbH

Tobias Gurtzick, Arvato GmbH |

Arvato Infoscore GmbH, a global financial services subsidiary of Bertelsmann, helps ecommerce companies detect and prevent consumer fraud. Last year, Arvato embarked on an ambitious plan to migrate to a microservices-based architecture with Docker containers as a key enabler. This would enable Arvato to be more effective in processing consumer and device data from customers to detect fraud. But strict data protection laws in Germany mean this has to be done securely.

In this case study, Tobias Gurtzick, software engineer, will take you through the details of the architecture, technology, and migration process. He will talk about how to successfully and securely deploy containers for new and refactored legacy apps across different storage solutions and how Docker enables Arvato to maintain and grow these environments. Arvato has also built Machine Learning Grids backed by Docker for a new Big Data app and will share the lessons learned along the way.


Dockerizing Aurea

Over a year ago, Aurea teams started on a quest to move our virtualized infrastructure to a containerized approach to run critical legacy applications. The primary goal was to decrease costs and increase resource utilization, but to get there we had to learn a lot. We've now migrated more than 2,000 Linux and Windows instances to just over 1,700 cloud containers. With nearly a year of running Docker EE production hosts, we have a wealth of experience to share.

In this session, we will share our most significant learnings, our infrastructure and operational ROIs (yes, we will share numbers), the most important monitoring metrics, and dozens of other tips & tricks, from increasing uptime to saving your life. We also want to cover our "dockerization" quest from two perspectives: that of infrastructure architects and system operators, as well as the realities for teams performing dockerization of legacy applications. As we found out, you can’t care about only one and not the other.

Panel: Modern App Security Requires Containers

Using Docker containers, enterprises now have strong, secure-by-default primitives available for deploying apps to their infrastructure. Containers are enabling organizations to adopt better engineering practices like immutable infrastructure -- increasing deployment agility and reducing mean time to patch. Companies are thinking strategically about how to securely manage their software supply chains.

Moderated by Sean Michael Kerner, collaborators in Docker's ecosystem will share how Docker Containers are revolutionizing the way apps are secured and how we can expect container security to evolve in the future.

Linux Containers on Windows: The Inside Story

At the last DockerCon, we showed an early prototype of a Linux container running natively on Windows Server, kick-starting the “LCOW” (Linux Containers on Windows) project. Since then, LCOW has expanded to involve multiple Docker projects, including LinuxKit, ContainerD, and Moby, as well as work from many teams within Microsoft. In this session, John Starks, the architect of Windows containers and lead of the Linux team in Windows, will present an insider’s view of LCOW’s architecture and implementation.

Real World Security: Software Supply Chain

As organizations embrace the modern software supply chain model that Docker enables, the threat model for their apps evolves. Applications are manufactured from many components and providers, shipped to a broad range of distribution centers such as a Docker registry, and deployed to many environments from public clouds to air-gapped infrastructure.

We will present the methodology of, and research gathered through, a real world case study on misconfigured registries. From there we will discuss a threat model for the end-to-end software supply chain: from build, to ship, to run. We’ll demonstrate an open source tool that can be used to audit your environment and discuss the steps taken to create an even more “secure by default” configuration for the OSS Docker Registry. Finally, we’ll highlight further best practices to secure your software supply chain.

Play with Docker (PWD): Inside Out

Marcos Nils, Tutorius | Jonas Leibiusky, Tutorius |

Looking Under The Hood: containerD

As we move our application units to containers, most people ask themselves about orchestrator choice. But that is not the only important choice: what about the underlying container runtime? In this talk, we will look at why you would use containerd with runC with both Swarm and Kubernetes, as well as other uses for containerd, like container OSes that ship immutable infrastructure.

Experience the Swarm API in Virtual Reality

Docker has a CLI that is a great starting point for programming and automation, but did you know that it's really just a wrapper for the Docker Engine REST API? The Engine has an accessible, well-documented API that's super easy to use. In this session, you'll learn how to consume the Docker API in different ways and be encouraged to build your own tools.

We built an app using VueJS and A-Frame that uses the API to visualize a Docker Swarm in Web-VR. The combination of these functional, declarative web technologies is a good way to build immersive, interactive experiences around infrastructure that are easily accessible to others. More people should be building these kinds of tools. We'll show how we started building the app in Swarm and how we're able to develop against the API with hot-module-reloading. (This totally works in VR!) You'll also get a rundown of how we leveraged existing components from the community in order to quickly prototype our application using Docker Stacks.

Ultimately, everyone will have a chance to experience interacting with a Swarm through the magic window of their phones. VR is a personal, educational medium. We should be building these virtual worlds around all the components of the Moby Project. Our hope is that we can share ideas and motivate others to create more compelling Docker and infrastructure tooling with Virtual Reality. Expect to leave the session wowed by the power of VR and informed about how to leverage Docker's REST API for your own endeavors.
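Since the abstract notes that the CLI is just a wrapper for the Engine's REST API, here is a minimal sketch of consuming a payload shaped like the API's `GET /containers/json` response. The sample data and the `running_containers` helper are hypothetical; against a live daemon you would fetch the same JSON over the Unix socket (e.g. `curl --unix-socket /var/run/docker.sock http://localhost/containers/json`).

```python
import json

# Hypothetical sample payload shaped like the Docker Engine API's
# GET /containers/json response (fields abbreviated for illustration).
SAMPLE_RESPONSE = json.dumps([
    {"Id": "8dfafdbc3a40", "Names": ["/web"], "State": "running", "Image": "nginx:alpine"},
    {"Id": "9cd87474be90", "Names": ["/db"], "State": "exited", "Image": "postgres:10"},
])

def running_containers(payload: str) -> list:
    """Return the names of running containers found in an API payload."""
    containers = json.loads(payload)
    return [name.lstrip("/")
            for c in containers if c["State"] == "running"
            for name in c["Names"]]

print(running_containers(SAMPLE_RESPONSE))  # ['web']
```

The same parsing applies whether the JSON comes from curl, an HTTP library, or a WebSocket bridge feeding a browser app like the VR visualizer described above.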

Tales of Training: Scaling CodeLabs with Swarm Mode and Docker-Compose

Why is any code lab workshop or live demo always such a challenge? A wise sysadmin once told me: “Get your hands dirty with production to learn.” So I want to tell you a story of getting our hands dirty by creating a code lab environment treated as production. This story will show that we can build a reproducible environment for code lab workshops by using the Docker “tools”: the Engine, Swarm Mode, Docker-Compose, Moby, and LinuxKit.

Following the spirit of “Play With Docker”, but generalized to any collection of services, this code lab toolkit has been used in workshops of 120+ people. That path was not a free lunch, but the lessons learned will give you an idea of how a training environment can be efficiently built with Compose and Swarm Mode by treating it as a “production” platform, tackling the plumbing’s “youth” limitations for the benefit of your use case. As a trainer, I never learned so much as when building something to teach others: this is the story I want to tell you, the tale of using Docker as a tool of MASSIVE KNOWLEDGE SHARING, which is the root of growing our industry together.

Android Meets Docker

CI is an eternal topic in software engineering, and it is still evolving for mobile. With an introduction to Docker and a well-crafted Docker image for Android, this presentation will guide you through all the facts about using Docker for Android CI: benefits, limitations, pitfalls, tweaks, and performance. It's not complete without a LIVE DEMO, showing you how to set up a Jenkins Android worker via an Android Docker image with minimal effort in 3 minutes. Follow along and you can build your own Acme CI at zero cost. Last but not least, there are some other tips on how Docker could help your mobile engineering.

Deploying Software Containers on Heterogeneous IoT Devices

The Internet of Things presents the situation of millions of devices collecting and generating data. Processing all of this in the cloud has a huge overhead in bandwidth and latency. In some cases it even has security implications. Moreover, we are dealing with heterogeneous devices with different hardware characteristics: architecture, CPU/GPU/TPU, sensors, and actuators. In this session we'll see how Docker can be used to carry out continuous deployment on IoT devices, bring the processing of data closer to the edge in a reliable way, and deal with device heterogeneity.

How to Build Machine Learning Pipelines in a Breeze with Docker

Let's review, and see in action, a few projects based on Docker containers that could help you prototype ML-based projects (detecting faces, nudity, evaluating sentiment, building a smart bot) within a few hours. We'll see in practice where containers could help in these types of projects.

From Zero to Serverless in 60 Seconds, Anywhere

You can adopt serverless with Docker and Swarm to take advantage of whatever hardware you have available in a resilient way. Learn what serverless applications are and how to get started in 60 seconds. We'll show the open-source options available and give an overview of Functions as a Service as demonstrated in the Cool Hacks in Austin.

You'll learn how to write functions and deploy them to a Docker Swarm using secrets, metrics, high availability, and auto-scaling.
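To make the function idea concrete, here is a hedged, in-process sketch of a FaaS-style register-and-invoke pattern. The handler names and the dispatcher are invented for illustration; a real framework would run each function as its own replicated Swarm service behind a gateway rather than dispatching in-process.

```python
from typing import Callable, Dict

# Registry mapping a route name to a handler function.
FUNCTIONS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that registers a handler under a route name."""
    def wrap(fn):
        FUNCTIONS[name] = fn
        return fn
    return wrap

@register("echo")
def echo(req: str) -> str:
    return req

@register("shout")
def shout(req: str) -> str:
    return req.upper() + "!"

def invoke(name: str, body: str) -> str:
    # In a real deployment each function would be a container image
    # scaled by the orchestrator; here we just look it up and call it.
    return FUNCTIONS[name](body)

print(invoke("shout", "hello swarm"))  # HELLO SWARM!
```

The register/invoke split is what lets the platform add secrets, metrics, and auto-scaling around handlers without the handler code knowing about them.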

Containerizing Hardware Accelerated Applications

Many applications allow you to use hardware such as GPUs and FPGAs for acceleration. Common examples include media processing and offloading highly parallel work to a GPU. Applications that use accelerators are resource heavy and have stacks spanning kernel and user space; accelerators often have their own requirements for operating system support and kernel versions. While it may not seem intuitive to containerize this type of application, the use of containers provides benefits such as reduced setup time from container reuse, fewer dependency conflicts, less dependency on a specific operating system, and easier updates.

In this session I show a media processing stack making use of containers alongside a GPU. Specifically, I explain the kernel and user space divide of a hardware-accelerated transcode application using a device exposed to the container. This specific stack is an interesting case because of its dependency on hardware, use of a custom kernel and libraries, and operating system requirements. Our investigations have shown the use of containers has minimal performance overhead compared to running natively. Furthermore, we can quickly deploy on other machines with reduced configuration effort.

There are some aspects of the application not suited to containerization, however. Since the application relies on a custom kernel, the use of containers does not necessarily increase portability. Improvement in this area would require rethinking how the applications are developed and distributed. Other areas of innovation include things such as Docker plugins to check for compatibility between the container software and the host kernel.

Integrating CERN Software Distribution with Docker

CVMFS is the CERNVM Filesystem, a scalable, reliable and low maintenance software distribution service. It offers a POSIX interface to a set of cache instances distributed worldwide, as part of the infrastructure used to process and analyse the data from the multiple experiments at CERN - including the ones from the Large Hadron Collider (LHC).

In this session we describe how physicists have started moving their workloads into containers, and how we integrated CVMFS into Docker using a Volume plugin. We'll show how repositories are mounted and how we use tags and hashes to mount the repositories at a well known state, essential for data analysis preservation. Finally we'll cover how we're looking into optimizing data access via Docker storage drivers.

Empowering Docker with Linked Data Principles

Docker containers are eating the world. Industry and academia are both exploiting Docker to orchestrate complex cloud infrastructure and to foster reproducibility of computational experiments. Linked Data captures the original idea of a connected yet decentralized Web: online resources are accessible through a URI, while ontologies and vocabularies like Schema.org foster interoperability and data integration. I advocate applying the Linked Data paradigm to the Docker ecosystem: providing dereferenceable access to Docker artifacts and making image registries accessible by means of graph query languages, e.g. SPARQL or GraphQL. To this extent, it is necessary to semantically describe Dockerfiles, images, and containers, and to adopt a data model like RDF.

The Fairy Tale of the One Command Build Script

Tobias Getrost, GN Hearing |

Do you have a build script that builds your software with a single command? Does it still work on a brand-new PC?

This presentation takes you on the journey to construct complex build environments using Docker. The journey follows our lessons learned and experiences going from hand crafted to Dockerized build environments. We will look at different patterns to build modular containers, ways to chain containers and the specialties of Windows containers.

Small, Simple, and Secure: Alpine Linux under the Microscope

Natanael Copa, Docker |

Alpine Linux is a distro that has become popular for Docker images. Why do we need another distro? Why does Alpine matter? How does it differ from other distros?

In this talk, we'll answer all these questions – and a few more.

Repainting the Past with Distributed Machine Learning and Docker

"A picture is worth a thousand words” – Frederick R Bernard. Video is worth thousands more. With millions of hours of black and white video footage circulating around the internet or locked indefinitely in storage archives, their true stories and colours have been lost forever. With the cost of breathing new colour into these fragments of history being $3000 per minute, the vast majority of this footage will never be truly appreciated. Wouldn’t it be revolutionary if we could combine the latest cutting edge research in machine learning with the power unleashed by distributed computing with Docker to solve this problem?

Using recurrent convolutional neural networks combined with advanced scheduling, distribution, and orchestration made possible only by Docker, we are able to realistically hallucinate colour back into any black and white video. And with machine learning, we can cut the costs dramatically by automating the process of painting each frame, resulting in a realistic high-quality video in record time. We will demo this and much more in our talk "Repainting the past with distributed machine learning and Docker".

Docker to the Rescue of an Ops Team

Rachid Zarouali, SYNOLIA |

In this talk, we'll discover how Docker came to the rescue of an Ops team while rebuilding our monitoring infrastructure from scratch. We'll start by quickly describing the challenges, then focus on why and how using Docker saved the project. From fixing dependency and isolation issues, to implementing rolling upgrades and hot addition of new features, to building a completely modular, scalable, and resilient infrastructure, we'll talk about why CI/CD workflows, Docker tooling, and Docker Swarm were the keys to success.

Continuous Packaging is also Mandatory for DevOps

Bruno Cornec, HPE |

While DevOps teams are comfortable with continuous integration and automated tests, the area of continuous packaging has not been given the attention it deserves.

Even with containers, delivering an application using software packages provides multiple advantages over file-based installation: it lets you manage dependencies more easily, provides metadata, checksum, and signature mechanisms, and works with package repositories.

But doing that in a continuous packaging approach means that the generation of these packages is fully automated and part of the software's build process. As a consequence, it eases the various steps of a solution's lifecycle: controlled impact of installation/uninstallation, identical deliveries up to the customer, and avoidance of code or metadata duplication.
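As a hedged illustration of the checksum and metadata mechanisms mentioned above, this sketch computes a sha256 digest and a minimal metadata record for a built artifact. The `package_metadata` helper and its fields are hypothetical; real rpm/deb packages carry richer metadata and GPG signatures on top of the checksum.

```python
import hashlib
import json
import os
import tempfile

def package_metadata(path: str, version: str) -> dict:
    """Build a minimal metadata record (name, version, checksum) for one artifact."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large packages don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return {
        "name": os.path.basename(path),
        "version": version,
        "sha256": h.hexdigest(),
    }

# Demo with a throwaway file standing in for a built package.
with tempfile.NamedTemporaryFile(suffix=".tar.gz", delete=False) as f:
    f.write(b"pretend package contents")
    tmp = f.name

meta = package_metadata(tmp, "1.2.3")
print(json.dumps(meta, indent=2))
os.remove(tmp)
```

In a continuous packaging pipeline, a record like this would be generated on every build and published alongside the package so consumers can verify what they downloaded.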

This presentation will detail the methodological approach around continuous packaging and demonstrate how this can be put in place using an Open Source tool such as project-builder.org and how this allows the MondoRescue project to deliver packages at will for lots of distribution tuples through the same number of Docker containers.

LinuxKit Demo on ARM64

Andrew Wafaa, ARM Limited |

LinuxKit is a container-related project announced by Docker in April 2017 at DockerCon in Austin, Texas. In this demo session, Dennis will show how to build a secure, portable, and custom Linux distribution with the help of the Moby tool on the ARM64 platform. He will also demonstrate how to replace or spawn containerized components of the distribution and reconstruct a new namespace-isolated system with LinuxKit.

Using LinuxKit to Build Custom RancherOS systems

RancherOS, a Linux distribution defined and orchestrated using cloud-init and compose files, is being re-architected to use containerd and LinuxKit. In this talk, we'll show how RancherOS works, with examples, and then use LinuxKit to build several customized versions of RancherOS, demonstrating how easy it can be to build and maintain the container OS that's best for your needs.

Declare Your Infrastructure with InfraKit, LinuxKit, and Moby

Steven Kaufer, IBM | David Freitag, IBM |

InfraKit is a toolkit for creating and managing declarative, self-healing infrastructure with a pluggable architecture. LinuxKit is a toolkit for building secure, lean, and portable Linux subsystems. We have been using InfraKit, LinuxKit, and the Moby tool extensively to build our Docker offerings for our public cloud. Come hear our experiences and lessons learned using these technologies to build something useful, and about our contributions back to the community on these projects.

State of Builder and BuildKit

Tonis Tiigi, Docker |

An overview of the new advancements added to Docker's builder feature in the newest releases, and how to use these features to make your build jobs more powerful and efficient. We'll cover multi-stage builds, the new dependency model, new performance features, added Dockerfile features, and more.

Dive into the new BuildKit architecture developed as part of the Moby project, the base for the future of `docker build`. Learn how to start playing around with BuildKit today and what kind of capabilities the new architecture exposes.


Introduction to Docker for IT Management


The demand for Docker has grown loud enough to permeate across all areas of an IT organization, and for good reason. Find out what all the talk is about and how Docker expedites your journey towards innovation.

The Modernize Traditional Applications Program


IT organizations continue to spend 80% of their budget simply maintaining their existing applications, while only spending 20% on new innovation. Docker and industry-leading partners can help you flip that around in just a few days and demonstrate how you can cut into that 80% to fund your innovation efforts.

Customer Success Stories & Hands-on Exercises


Hear first-hand how Docker EE and the MTA program are helping F500 companies transform their organizations and give their legacy applications modern capabilities over the course of just a few days.

Partner Solution Demonstration


Attend this session to learn how Docker’s top partners can help you define the right mix of Hybrid IT and quickly deliver the first step in your journey towards a modern application architecture.

Understanding the ROI and Building the Business Case


In this session we’ll show exactly how to connect the dots between your legacy applications and your modern applications in a Docker EE ecosystem, in order to make the business case. We’ll also go into what ROI looks like and the tremendous impact Docker EE can have on your bottom line.

Most businesses have decided to start a container journey. The speed, simplicity, portability, and efficiency value prop of containers resonates with everyone. What is less clear is how to get started. Do you build new cloud native applications on containers? Or do you modernize existing applications in order to gain efficiency? Which software do you use? Where do you run the environment? How do you operationalize?

IBM and Docker Inc. have been partners since December 2014. This session will cover the broad scope of capabilities provided by IBM to help your organization answer all of these questions, whether you want to modernize an existing traditional application or develop a new cloud-native app, and whether you want to do it alone or get IBM's help. Come understand IBM's extensive capabilities with Docker.

Reduce Ops Cost by 50%, Get to the Cloud in Five Days and Other Such Miracles from HPE and Docker

Matt Foley, HPE |

Ever find yourself needing to move legacy applications off of old platforms but you don’t even have the source code or access to those coders? How about needing to reduce operations cost, improve efficiency and all the while deliver applications 10 times faster? Then come discover the MTA solution from HPE and Docker that checks all the boxes.

Modernizing traditional applications delivers the efficiency, costs savings, portability and agility needed to embrace the journey to hybrid IT. Together HPE and Docker deliver a way to modernize legacy applications, using Docker Enterprise Edition and integrations with HPE infrastructure to accelerate deployments to modern composable infrastructure.

Attend this session to learn how HPE can help you define the right mix of Hybrid IT and quickly deliver the first step in your journey towards a modern application architecture. Learn from key customer use cases how HPE and Docker have empowered businesses to gain improvements in efficiency, portability and speed to market while cutting costs.

Navigating the Docker Toolset in Visual Studio and Azure

Shayne Boyer, Microsoft |

Tooling is usually at the forefront whenever you're evaluating technology options for pushing your apps to production. Continuous integration, command line, GUI… what’s available? In this session we’ll take a look at the options that allow you to lift and shift your legacy .NET applications using Visual Studio, explore the Azure CLI to create containers in seconds on Azure Container Instances, and manage your containerized apps in Azure App Service on both Windows and Linux containers. While we’re at it, maybe even some debugging too.

Hacked! Run Time Security for Containers

Gianluca Borello, Sysdig |

Containers have the potential to improve your security posture in production, but the black-box nature of containers and the complexity of distributed microservices present new challenges that InfoSec and DevSecOps teams may not be ready for yet.

Common approaches like scanning and container signatures will get you part of the way, but what happens when your production environment is hit by a zero day threat or unknown event? Do you have the capabilities to detect and protect against that incident?

In this session we will present a robust solution for implementing run-time security monitoring, policy enforcement, and forensics using activity signals based on system calls.

We’ll cover topics such as:

How do I see activity originating within containers?

What does it take to apply policies consistently across all containers that make up a microservice?

How can I get a service-oriented view of container activity based on Docker Datacenter or Kubernetes metadata, for the purposes of auditing or forensics?

What can I leverage in open source to make this happen?

You’ll walk away from this talk understanding what types of events to look for, how to alert on them, and what you need in order to do deep forensics in the event of an incident.
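Runtime detection of the kind described above is commonly implemented in open source with Sysdig Falco, which evaluates rules against system call activity. A minimal illustrative rule is sketched below; the rule name and condition are simplified examples in Falco's YAML rule syntax, not a production policy:

```yaml
# Illustrative Falco-style rule: flag an interactive shell spawned in a container,
# a common post-exploitation step. Simplified for clarity.
- rule: Terminal shell in container
  desc: Detect a shell process started inside a container
  condition: container.id != host and proc.name in (bash, sh, zsh)
  output: "Shell spawned in a container (user=%user.name container=%container.name image=%container.image)"
  priority: WARNING
```

Rules like this can drive alerting or feed an audit trail for the forensics scenarios discussed in the session.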

Accelerate your Container Adoption with Cisco and Docker

Vish Jakka, Cisco |

Technologies and Solutions for Cloud Native and Legacy Applications

Using containers or starting to use them for your applications? Are containers new to you? Looking for ways to deploy and scale solutions reliably? Have a mix of legacy and cloud native applications? Come attend the session to learn how Cisco and Docker have partnered to offer production-ready turn-key solutions by leveraging best-of-breed technologies.

Cisco and Docker, in conjunction with our ecosystem partners, have jointly developed products, technologies and validated solutions to accelerate adoption and simplify deployment of containers and microservices across the data center for both legacy and cloud native applications.

The combination of Docker container technology and Cisco UCS server hardware supports highly scalable, resilient, elastic application deployment with the simplicity of the cloud and a full set of enterprise capabilities. And Contiv, a Cisco sponsored open source project, provides a unified networking fabric for heterogeneous Docker deployments on VMs, bare-metal, public and private clouds.

With AppDynamics Microservices iQ, gain fine-grained visibility and monitoring for your applications and Docker environments. Correlated metrics across applications, containers, and hosts provide a new level of insight to optimize your end user experience.

Come learn how you can modernize your applications and your datacenter with Cisco and Docker solutions.

Continuous delivery with Docker and Bitbucket Pipelines

Aneita Yang, Atlassian |

Bitbucket Pipelines is a fully hosted continuous integration and delivery tool, built on Docker, that lives right inside Bitbucket Cloud. It brings your team the benefits of CI/CD practices without the overhead of configuring and maintaining your own infrastructure. Come and learn how Bitbucket Pipelines can accelerate your team's build/test feedback loop, and how our newly released features can simplify the way you build, test and push Docker images.
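As a concrete illustration, a Pipelines build is configured by a bitbucket-pipelines.yml file in the repository root. A minimal sketch that builds and pushes a Docker image is shown below; the image name and the $DOCKER_USER/$DOCKER_PASS variables are placeholders you would define as repository variables:

```yaml
# bitbucket-pipelines.yml (illustrative sketch)
image: atlassian/default-image:latest

pipelines:
  default:
    - step:
        name: Build and push image
        services:
          - docker          # enables the Docker daemon inside the build step
        script:
          - docker build -t "$DOCKER_USER/my-app:$BITBUCKET_COMMIT" .
          - docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
          - docker push "$DOCKER_USER/my-app:$BITBUCKET_COMMIT"
```

Because the build runs in a container itself, swapping the top-level `image` is all it takes to change the toolchain for the whole pipeline.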

Unify VMs and Containers at Every Edge

Chris Brown, Nutanix |

Docker containers bring incredible power and agility to developers, but underlying operational challenges, such as storage issues and new skillsets, limit the reach of containers to their own isolated pod. With the Nutanix Enterprise Cloud Platform you can rapidly deploy new Docker Datacenter clusters throughout your entire infrastructure and orchestrate VMs alongside containers, giving you the ability to move to containers at your own pace and place containers where they might not have fit before. In this technical deep dive we will show how Nutanix can quickly deploy Docker hosts to new locations, manage them under a single pane of glass, and show how the Docker-certified Nutanix Volume Plugin integrates seamlessly into your Docker environment, allowing you to manage your Docker Volume data just as easily as any other.

Containers aren’t just for Microservices: Migrating Legacy Workloads to Docker

Oscar Renalias, Accenture |

There is a growing demand to leverage containers not only as a platform to run microservices and greenfield applications, but also to run legacy workloads and have them participate in some of the benefits provided by container platforms. In this session, we will openly discuss Accenture’s real-world experience with legacy workload migration to containers and Docker Datacenter, where we succeeded, and where the container ecosystem still has room to improve.

Cloud Native Storage Patterns with Docker

Alex Chircop, StorageOS |

There is no such thing as a stateless architecture – containerized applications need to store data and state somewhere.

We discuss what a storage platform should look like in cloud native architectures and what is needed to interface composable microservices with advanced containerized storage patterns. This session will also include a demo showing how Docker Volume Plugins can manage a cluster-wide storage pool and provide highly available persistent storage volumes to databases and applications.

Attendees will also:

- Take advantage of advanced containerized storage patterns to improve high availability and security
- Understand how to use Docker Volume Plugins in your environment
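From the user's side, volume plugins slot into Docker's standard volume CLI. A hedged sketch of the workflow follows; the plugin reference and option names are illustrative, and the commands require a running Docker daemon with the plugin's backing cluster available:

```
# Install a volume plugin, create a volume through it, and attach the
# volume to a database container (names and options illustrative).
docker plugin install storageos/plugin
docker volume create --driver storageos --opt size=5 myvol
docker run -d --name db -v myvol:/var/lib/postgresql/data postgres
```

The key point is that once the plugin is installed, the persistent volume is consumed exactly like a local one, so the application container needs no changes.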

Making Application Monitoring a Cloud Native Platform Feature

Alois Mayr, Dynatrace |

With the transformation towards Cloud Native platforms, we are seeing a paradigm shift in how self-sufficient teams develop and deploy applications. Microservice teams leverage built-in platform features to run, scale, rollback and upgrade their app deployments. Monitoring helps them to understand if their apps are performing properly and interacting correctly with other apps in production. But how do they know if poor cluster performance impacts application health, and what the overall performance of all apps deployed to a cluster looks like? In this session, Alois Mayr will explore how monitoring can be made a platform feature to help application teams monitor and maintain application and cluster health. Alois will explain why automatic discovery of dependencies across cluster nodes, containers and services is critical to pinpoint the root cause of degraded application response times and failure rates.

Kubernetes for Docker Users

Darren Shepherd, Rancher Labs |

When Docker was first released four years ago, Developer and DevOps teams everywhere fell in love with it. "docker run", "docker pull", "docker build"… it was so simple, yet incredibly powerful. Docker sparked the creativity of millions of users and, as a result, the container ecosystem exploded. At the same time, Google released Kubernetes, a powerful platform that provides a framework to orchestrate containers. It includes a rich set of features that users and enterprises need to deploy containers in production, including advanced RBAC, multitenancy, and enhanced cluster resource management. But how do you bring together these two historically parallel approaches? In this session, we’ll discuss how you can apply that same simple Docker user experience people fell in love with while taking advantage of all the power of Kubernetes.

Eureka! The Open Solution to Solving Storage Persistence

Chris Duchesne, {code} |

For a while now there has been “the storage problem”: the continual challenge of ensuring storage interoperability with cloud native platforms. It’s almost impossible for an end-user or storage-centric developer to keep up with the changes… until now! Get a deep dive on REX-Ray with its enhanced set of features far beyond any other driver, a new plugin framework to enable any storage platform to interoperate with any scheduler, and new integrations with the Container Storage Interface.

Managing Elephants and Hummingbirds

Stig Skilbred, CA Technologies | Gary Vermeulen, CA Technologies |

Microservices are the hummingbirds of the IT infrastructure, and managing them, with the complexity they introduce, is a painful process. Building, testing, securing and deploying create barriers to innovation. Removing the friction between demand and delivery frees developers to focus on rapidly building great user experiences: omni-channel support in the device world, security against an ever-increasing threat landscape, and CI/CD that completes the feedback loop, giving customers what they need when they need it. Once deployed, ensuring an exceptional user experience becomes critical. Microservices don’t live in a vacuum, and abnormal phenomena make understanding, triaging and guarding against issues very difficult.

Stronger Security Through Containers and Machine Learning

John Morello, Twistlock |

While some have focused on trying to bend traditional security approaches to fit containers and devops, the larger cyber opportunity has often been missed. Containers, both the core technology and the operational patterns they enable, have some fundamental differences from traditional models. Some of these differences, like the greater rate of change devops enables, can favor the attacker and make legacy security practices less effective. However, other differences, like the declarative nature of containers, enable machine learning to be applied to build ‘allow list’ models of their proper runtime state. This automation enables security to be tailored to the app workloads, greatly reducing the need to manually create and manage security policy. In this session, we examine the changes to the threat landscape that containers bring, what fundamental characteristics of containers are different, and how machine learning can leverage these characteristics to automate the creation and management of scalable, app-tailored defenses.

Building Robust Services with gRPC

Zack Butcher, Google |

gRPC is a popular open source framework to make RPCs easy and efficient for everyone. This talk gives an overview of techniques and best practices for building services with gRPC that are robust and secure, including: client side load balancing, deadlines, and cancellations; transport security via TLS; utilizing interceptors to address cross cutting concerns like deadline propagation, metrics, and tracing.

Docker, Java, and Databases, Oh My!

Shaun Smith, Oracle |

Learn about how Docker and Oracle are taking Java, databases, and cloud infrastructure into the container-native era. Whether you are building microservices in Java, have an Oracle database somewhere in your application architecture or are thinking what’s next with serverless approaches, find out how Docker and Oracle are collaborating together in new ways to help you develop modern container-native applications.

Running stateful applications in containers presents unique challenges. In this session we will demo how to set up, run and scale a Redis Enterprise cluster on Kubernetes. We will also discuss some of the challenges and trade-offs we encountered when working with Redis in a containerized environment.

Monitoring Containers: Follow the Data

Ilan Rabinovitch, Datadog |

At Datadog we help thousands of organizations monitor their infrastructure and applications. In this session, we’ll dive deeper into the several hundred trillion data points we’ve gathered to extract information about the real-world use of containers and see trends in container use.

As we look at container use, we’ll also discuss the top applications being used in containers and, using the data, provide insight into which metrics you should watch and how to troubleshoot based on those metrics. In this session, we will also look at a framework for your metrics and how to use it to find solutions to the issues that come up.

We will cover the three types of monitoring data; what to collect; what should trigger an alert (avoiding an alert storm and pager fatigue); and how to follow the resources to find the root causes of problems.

Although the real-world container use data is derived from Datadog users, the focus of this session is not tool specific, so attendees will leave with strategies and frameworks they can implement in their container-based environments today regardless of the platforms and tools they use.

Run your Docker Apps in Production on Google Cloud with Kubernetes

Mete Atamel, Google |

Docker has fundamentally changed the way people run applications. Kubernetes offers rich primitives for deploying and managing distributed, containerized apps. It helps you reach new levels of availability and utilization, while lowering your ops burden. In this talk we'll explore some of the concepts in Kubernetes and take a look at how Kubernetes 1.6 advances the efforts of cloud native computing.
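The deployment primitive mentioned above is expressed declaratively. A minimal manifest sketch is shown below; the image name is illustrative, and Kubernetes will keep the stated replica count running, rescheduling pods as needed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3                 # Kubernetes maintains this count for availability
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: gcr.io/google-samples/hello-app:1.0   # illustrative image
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, and scaling up is a one-line change to `replicas`, which is where the utilization and availability gains come from.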

Containers with vSphere Body Armour: Enterprise Ninja Skills

Ben Corrie, VMware |

Do you dream of deploying an OS in seconds with zero maintenance costs? Wish it was easy to scavenge unused compute from your virtual infrastructure? Do you wish you could get a development environment from your IT department without raising a ticket? Ever find yourself scratching your head about production grade security, isolation and performance? You may not have noticed, but the container revolution has transformed the way you can consume vSphere infrastructure. vSphere Integrated Containers allows an IT admin to provide a CaaS portal to tenants who can then use the Docker API as their own private compute cloud. If you manage vSphere environments or consume them, you need to see this!

Join Ben Corrie as he showcases how easy it is to consume and automate virtual infrastructure with VMware's free, open source capabilities. This live demo will have a particular focus on Jenkins integration and developer environments.

The Equifax Breach - Contained by Containers

Benjy Portnoy, Aqua Security |

High profile cyber attacks are on the rise. The latest Equifax breach has impacted millions of American citizens and generated awareness that no organization is immune to the next attack. The presentation will simulate the Equifax breach using the same vulnerability that was exploited but in a containerized environment. I will present why using a containerized application would have enhanced the security posture and mitigated the damage of the breach.

An Enterprise Container Experience: Dev VS. Ops

Magnus Glantz , Red Hat | Jacob Borella, Red Hat |

Take part in a detailed and fun hands-on review of the critical features required in an enterprise grade container platform - by trying to DESTROY IT.

While learning about the enterprise grade features and stability of OpenShift Container Platform, participate in the presentation and become the evil Ops team bent on trying to prevent the developer from successfully deploying their application (and make the presentation come crashing down).

Best Practices for Securing Containerized Applications

Kumbirai Tanekha, CyberArk |

What are the best practices for securing containerized applications? How can developers secure their containerized applications across the DevOps pipeline? This talk will share practical tips and tricks on how to secure your containerized applications and conclude with a demo of Conjur from CyberArk. Conjur is an open source security service that integrates with popular CI/CD tools to secure secrets, provide machine identity authorization, and more.

Tips and Tools for Running Container Workloads on AWS

Abby Fuller , AWS | Tiffany Jernigan, AWS |

In this session, we'll cover tips and tricks for running container workloads on AWS, including general microservices best practices, specific tips for ECS and Kubernetes, how to get the most out of your resources, and ways to optimize and troubleshoot deployments for speed and profit.

SMACK Stack 2.0

Matt Jarvis, Mesosphere |

In today’s always-connected economy, businesses increasingly need to provide data-centric real-time services to customers, such as recommendations, targeted advertising and fraud detection. To make these experiences possible, companies have started transitioning from big data analytics, where data is processed in batches after collection, to fast data analytics, where data is processed in real-time to provide immediate insights to companies and their customers.

The SMACK stack (Apache Spark, Mesos, Akka, Cassandra, and Kafka) is establishing itself as a standard for these fast data architectures.

As the SMACK stack emerges as an industry standard, it is evolving in multiple dimensions:

Individual components: Even though the name SMACK is derived from specific components (which are themselves maturing and gaining functionality over time), in practice data architects will swap individual components of the stack to fit their specific needs. Consider, for example, using Flink for stream processing, or Elasticsearch for storage.

New use cases: While often initially considered for fast data processing, we are seeing SMACK users take advantage of characteristics of the stack, such as platform elasticity, for exciting new use cases.

In this talk, we will discuss how the properties of the SMACK stack can help you rethink your architecture, with an emphasis on the impact of current and future changes to the stack and its components.