Deep Learning using PyTorch

Soumith Chintala is a Researcher at Facebook AI Research. He works on deep learning and high-performance computing.

Presentation Description

Overview:

This talk covers PyTorch, a new deep learning framework that enables new-age A.I. research using dynamic computation graphs. It examines the architecture of PyTorch, discusses the reasoning behind key design decisions, and then looks at the resulting improvements in user experience and performance.

Learning Outcomes:

You will learn about deep learning using the PyTorch framework

You will learn about the similarities and differences between PyTorch, NumPy, and TensorFlow

You will learn about the current and future state of Artificial Intelligence

You will know about several tools for AI development

Talk Timeline

0:00 - 2:25: Introduction and Overview

2:25 - 10:15: Ndarray Library

PyTorch is an ndarray library

Example of NumPy vs PyTorch

NumPy and PyTorch have a lot of similarities
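A minimal sketch (not from the talk) of how closely the two APIs mirror each other; the same arithmetic is written once with NumPy arrays and once with PyTorch tensors:

```python
import numpy as np
import torch

# The same elementwise computation in both libraries
a_np = np.ones((2, 3))
b_np = a_np * 2 + 1          # NumPy ndarray arithmetic

a_t = torch.ones(2, 3)
b_t = a_t * 2 + 1            # PyTorch tensor arithmetic, same syntax

# Shapes and results line up one-to-one
print(b_np.shape, tuple(b_t.shape))
```

The parallel API is deliberate: familiarity with NumPy transfers almost directly to PyTorch.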

PyTorch has a rich tensor library

NumPy bridge
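A small sketch (mine, not from the slides) of the NumPy bridge: `torch.from_numpy` and `Tensor.numpy` convert in both directions without copying, so the array and tensor share the same memory:

```python
import numpy as np
import torch

n = np.ones(3)
t = torch.from_numpy(n)   # zero-copy: t shares memory with n

n += 1                    # mutating the array is visible through the tensor

back = t.numpy()          # zero-copy conversion back to NumPy
```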

Seamless GPU Tensors

A key benefit of PyTorch is fast, seamless GPU execution.
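A hedged sketch of "seamless GPU tensors": the same code runs on CPU or GPU, with only the device selection changing (guarded here so it also runs on CPU-only machines):

```python
import torch

# Pick the GPU when one is available; otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(4, 4, device=device)
y = x @ x.t()             # identical code on either device
```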

10:15 - 19:18: Automatic Differentiation Engine

Deep learning frameworks provide gradient computation

Provide integration with high-performance DL libraries like cuDNN

PyTorch uses autograd for tape-based automatic differentiation
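A minimal example (not from the talk) of tape-based automatic differentiation: the forward pass records operations on a tape, and `backward()` replays it to compute gradients:

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)

y = (x ** 2).sum()        # forward pass: operations are recorded on the tape
y.backward()              # replay the tape in reverse to get dy/dx

# d/dx of x^2 is 2x, so x.grad is [4.0, 6.0]
print(x.grad)
```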

TensorFlow vs PyTorch side-by-side comparison

Key difference between TensorFlow and PyTorch is that you don’t have to construct the graph ahead of time
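A sketch (my own, illustrating the point rather than a slide) of define-by-run: the graph is built on the fly as ordinary Python executes, so plain control flow can shape the computation without any ahead-of-time graph construction:

```python
import torch

def model(x, depth):
    # The number of layers applied is decided at runtime by a
    # plain Python loop; no static graph is declared in advance.
    for _ in range(depth):
        x = torch.relu(x * 2 - 1)
    return x

out = model(torch.tensor([0.5]), depth=3)
```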

19:18 - 30:54: Motivation Behind the Design of torch.autograd

Dense captioning system and DeepMask examples

Static datasets + Static model structure/Offline Learning

Implementation of Neural Networks in Video Games

Models that add new memory or layers, or change their evaluation path based on their inputs

There’s a huge need for dynamic automatic differentiation
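A hypothetical sketch (the class name and layer choice are mine) of the kind of model that motivates dynamic auto-diff: its evaluation path depends on the input, so the graph differs from one forward pass to the next:

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    """Toy module whose evaluation path depends on its input."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        # Apply the layer a data-dependent number of times (1 to 3);
        # a static graph cannot express this without special control-flow ops.
        n = int(x.abs().sum().item()) % 3 + 1
        for _ in range(n):
            x = torch.relu(self.fc(x))
        return x

net = DynamicNet()
out = net(torch.ones(4))
```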

30:54 - 37:04: Questions

How computationally expensive is it to change the graph dynamically?

What is driving the changes in the graph? Are you optimizing the loss?

Are you able to optimize the computation of the graph in any way?

Are there plans for higher-level libraries that use PyTorch as a backend?