Deep Learning with PyTorch in a Jupyter notebook

Last summer, our blog post “GeoMesa analytics in a Jupyter notebook” described how Jupyter Notebook allows interactive exploration of data using programming languages that are rarely used interactively. It also showed how a series of steps can be saved in a reusable notebook for others to learn from.

CCRi Data Scientist Tim Emerick recently gave a presentation to other CCRi employees about PyTorch, a Python library built on the open source Torch framework and designed to run deep learning applications on GPUs. (Graphics Processing Units were originally invented to render graphics more efficiently than regular CPUs could on gaming machines such as the PlayStation 4 and the Xbox One. Machine learning researchers have since found that their optimized handling of matrix math makes GPUs ideal for deep learning work, especially given the price of these mass-produced processors compared with other specialized hardware.)
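To see why that matrix-math advantage matters, here is a minimal sketch (not from Tim's presentation) of how PyTorch lets the same code run on a GPU when one is available and fall back to the CPU otherwise:

```python
import torch

# Use the GPU when one is available; otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Two random 1024x1024 matrices, created directly on the chosen device.
a = torch.rand(1024, 1024, device=device)
b = torch.rand(1024, 1024, device=device)

# Matrix multiplication -- exactly the kind of operation GPUs excel at.
c = a @ b
print(c.shape)  # torch.Size([1024, 1024])
```

On a machine with a CUDA-capable GPU, the multiplication runs on the graphics card with no other changes to the code.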

CCRi developers have done a lot of development for GPUs, but mostly with tools designed for use with the C and C++ programming languages or with Lua, a lightweight scripting language designed to be embedded in other applications and the language of the original Torch framework. A Python-based tool gives a much broader range of developers the ability to develop deep learning applications on these specialized processors.

In his presentation, Tim used a bit of math, some nice diagrams, and plenty of short examples of running code to show us how to use PyTorch to develop deep learning applications. Jupyter made it easy for him to tie this all together in a presentation that he could share so that others could run the code themselves. I would describe his notebook in more detail, but you can see it all (and try the code) yourself in his PyTorch lunch and learn notebook on GitHub.
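The notebook has the full walkthrough, but the flavor of what PyTorch development looks like can be suggested with a small sketch (again, not taken from the notebook): fitting a line to toy data using PyTorch's autograd and a plain gradient-descent optimizer.

```python
import torch

# Toy data for illustration: points on the line y = 2x + 1.
x = torch.linspace(0, 1, 100).unsqueeze(1)
y = 2 * x + 1

# A single linear layer (one weight, one bias) and an SGD optimizer.
model = torch.nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for _ in range(500):
    optimizer.zero_grad()        # clear gradients from the previous step
    loss = loss_fn(model(x), y)  # forward pass and mean-squared-error loss
    loss.backward()              # autograd computes gradients of the loss
    optimizer.step()             # gradient-descent update of weight and bias

# The learned parameters should be close to the true slope 2 and intercept 1.
print(model.weight.item(), model.bias.item())
```

The same pattern (forward pass, `backward()`, optimizer step) scales up from this one-parameter toy to the deep networks the presentation covered.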