Who is this presentation for?

Prerequisite knowledge

Experience creating and deploying ML applications and using Kubernetes (useful but not required)

What you'll learn

Understand why ML is easier with Kubeflow, how to use Kubeflow to run ML on Kubernetes, and what's next for Kubeflow

Description

Practically speaking, some of the biggest challenges facing ML applications are composability, portability, and scalability. Kubernetes is well suited to address these issues, which is why it’s a great foundation for deploying ML products. Kubeflow is designed to take advantage of these benefits.

Kubeflow makes it easy for everyone to develop, deploy, and manage portable, scalable ML everywhere and supports the full lifecycle of an ML product, including iteration via Jupyter notebooks. It removes the need for expertise in a large number of areas, reducing the barrier to entry for developing and maintaining ML products. The composability problem is addressed by providing a single, unified tool for running common processes such as data ingestion, transformation, and analysis; model training, evaluation, and serving; and monitoring, logging, and other operational tasks. The portability problem is resolved by supporting the use of the entire stack locally, on-premises, or on the cloud platform of your choice. Scalability is native to Kubernetes and is leveraged by Kubeflow to run all aspects of the product, including resource-intensive model training tasks.
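To make the scalability point concrete, here is a minimal, illustrative sketch of a distributed training job expressed as a Kubeflow TFJob custom resource. The `kubeflow.org/v1` TFJob kind is part of the Kubeflow training operator; the job name and container image below are hypothetical placeholders.

```yaml
# Illustrative sketch: a TFJob handled by the Kubeflow training operator.
# The metadata name and image are assumptions, not real artifacts.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train              # hypothetical job name
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2                # scale training out by changing a single field
      template:
        spec:
          containers:
            - name: tensorflow
              image: my-registry/mnist:latest   # hypothetical training image
```

Because the same manifest can be applied to any Kubernetes cluster, the job definition itself is what carries portability: scaling up means editing `replicas`, not rewriting infrastructure code.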

Michelle Casbon demonstrates how to build a machine learning application with Kubeflow. By providing a platform that reduces variability between services and environments, Kubeflow enables applications that are more robust and resilient, resulting in less downtime, fewer quality issues, and reduced customer impact. Additionally, it supports the use of specialized hardware such as GPUs, which can reduce operational costs and improve model performance. Join Michelle to find out what Kubeflow currently supports and the long-term vision for the project.
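As a brief sketch of the GPU support mentioned above: on Kubernetes, specialized hardware is requested through standard resource limits, so a training pod managed by Kubeflow can claim a GPU declaratively. The pod name and image here are hypothetical; `nvidia.com/gpu` is the standard device-plugin resource name for NVIDIA GPUs.

```yaml
# Illustrative sketch: requesting a GPU via standard Kubernetes resource limits.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-train                # hypothetical pod name
spec:
  containers:
    - name: trainer
      image: my-registry/trainer:latest   # hypothetical training image
      resources:
        limits:
          nvidia.com/gpu: 1      # schedules the pod onto a node with a free GPU
```

The scheduler places the pod only on nodes advertising that resource, so the same manifest works unchanged across on-premises and cloud clusters with GPU nodes.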

Michelle Casbon

Google

Michelle Casbon is a senior engineer on the Google Cloud Platform developer relations team, where she focuses on open source contributions and community engagement for machine learning and big data tools. Michelle’s development experience spans more than a decade and has primarily focused on multilingual natural language processing, system architecture and integration, and continuous delivery pipelines for machine learning applications. Previously, she was a senior engineer and director of data science at several San Francisco-based startups, building and shipping machine learning products on distributed platforms using both AWS and GCP. She especially loves working with open source projects and is a contributor to Kubeflow. Michelle holds a master’s degree from the University of Cambridge.