Workstation AI

Ubuntu-certified workstations from Dell and HP with NVIDIA, microk8s and Kubeflow

Accelerate data science

Lightest footprint

Laptop to workstation

GPGPU optional

Develop and test AI

Bare metal AI

Kubernetes on bare metal with NVIDIA GPGPU acceleration

Highest performance

On-premises with local data

Hardware recommendations

Fully managed options

Google Cloud AI

GKE on Ubuntu with NVIDIA GPGPU acceleration

Effectively infinite scale

Portable workloads

Fastest cloud ML

Canonical Cloud AI

Kubeflow on Kubernetes on OpenStack with NVIDIA GPGPU acceleration

Maximize benefits of OpenStack

On-premises with local data

Hardware recommendations

Fully managed options

Kubeflow features

Kubeflow brings together all the most popular tools for machine learning, starting with JupyterHub and TensorFlow, in a standardised workflow running on Kubernetes. Optimised for a wide range of hardware and cloud infrastructure, Kubeflow lets your data scientists focus on the pieces that matter to the business.

It is an extensible framework that lets you leverage the tools of your choice. Start with TensorFlow and JupyterHub, or bring your own frameworks and tools. Combined with Kubeflow’s automation, this accelerates your machine learning activities, from model development to model training to model sharing.
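On an Ubuntu workstation, the stack described above can be sketched in a few commands. This is a minimal sketch assuming the MicroK8s snap and its optional add-ons; add-on names and availability have varied across releases, so check the current MicroK8s documentation before relying on them:

```shell
# Install MicroK8s as a snap (single-node Kubernetes on the workstation)
sudo snap install microk8s --classic

# Enable core add-ons; the gpu add-on (NVIDIA GPGPU support) is optional
sudo microk8s enable dns storage
sudo microk8s enable gpu

# Enable the Kubeflow add-on (assumed here; in some releases Kubeflow
# is instead deployed on top of MicroK8s with separate tooling)
sudo microk8s enable kubeflow
```

The same Kubeflow workloads can then be moved unchanged to a bare-metal cluster or to GKE, which is the portability the following paragraph describes.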

Kubeflow was initiated by Google on Ubuntu, giving perfect portability of AI workloads from your workstation, to your data center rack on Canonical’s bare metal Kubernetes or Canonical’s OpenStack virtualization, to Google’s cloud Kubernetes service GKE, which also runs on Ubuntu. Simple.

Canonical’s Kubeflow and Kubernetes on bare metal servers, with NVIDIA GPGPUs, provides an ultra-high-performance machine learning cluster. Deployment, support, and optional remote management and remote operations make it the best way to accelerate your data science and machine learning.

Canonical has provided both a familiar and highly performant operating system that works everywhere. Whether on-premises or in the cloud, software engineers and data scientists can use tools they are already familiar with, such as Ubuntu, Kubernetes and Kubeflow, and greatly accelerate their ability to deliver value for their customers.

David Aronchick, Google Product Manager for Kubeflow

in partnership with

Consulting to get started, Managed Ops to keep you focused

Turn on the taps with a workshop to understand the full stack of machine learning. Build a full pipeline from developer stations to your data center, to the public cloud. Canonical works with the leading companies to ensure you have the widest range of choices. First, start with one of our standard bare metal Kubernetes service packages (Discoverer or Discoverer Plus) and then select the AI Add-on to unlock the benefits of AI on Kubernetes.

IoT and Edge AI

Train in the cloud. Act at the edge.

Cameras, music systems, cars, even firewalls and CPE are becoming smarter — from natural language processing to image recognition, from real-time high-speed autonomous navigation to network intrusion detection. Ubuntu gives you a seamless operational framework for development, training and inference, all the way out to the edge.

Leaders in artificial intelligence choose Ubuntu

Organizations are increasingly looking to accelerate their deep learning and AI implementations. In addition to using Ubuntu on our DGX systems, we have been working with Canonical to offer Kubernetes on NVIDIA GPUs as a scalable and portable solution for multi-cloud deep learning training and inference workloads.

Duncan Poole, Director of Platform Alliances at NVIDIA

Partner with us

It takes an open ecosystem to solve the diverse challenges of AI infrastructure across every sector and in every region. Our partners ensure that you have the widest range of capabilities available for automated integration in your cloud, and that you can get insight and support locally.

To learn more about our partners or becoming a Canonical AI partner, please contact us today.