When PyTorch was first launched in early 2017, it quickly became a popular choice among artificial intelligence (AI) researchers, who found its flexible, dynamic programming environment and user-friendly interface ideal for rapid experimentation. The community has continued to grow quickly ever since.

As a result, PyTorch is today the second-fastest-growing open source project on GitHub, with a 2.8x increase in contributors over the past 12 months.

Release of PyTorch 1.0 Stable:

As the PyTorch ecosystem and community continue to grow with interesting new projects and educational resources for developers, PyTorch 1.0 stable was released at the NeurIPS conference, adding to that momentum even further.

The latest version, first shared as a preview release during the PyTorch Developer Conference in October this year, includes production-oriented features, support from major cloud platforms and much more.

Why PyTorch 1.0 Stable?

With this version, researchers and engineers can take full advantage of the open source deep learning framework’s new features: revamped distributed training, a hybrid front end for transitioning seamlessly between eager and graph execution modes, a pure C++ front end for high-performance research, and deep integration with cloud platforms.
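As a minimal sketch of the hybrid front end (the module, layer sizes and input shape here are illustrative, assuming PyTorch >= 1.0): an ordinary eager-mode module can be traced into a graph representation with torch.jit.trace and then run like any other module, or serialized for production.

```python
import torch

# An ordinary eager-mode module, written the usual PyTorch way.
class TwoLayerNet(torch.nn.Module):
    def __init__(self):
        super(TwoLayerNet, self).__init__()
        self.fc1 = torch.nn.Linear(4, 8)
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = TwoLayerNet()
example_input = torch.randn(1, 4)

# Trace the eager model into a TorchScript graph using an example input.
traced = torch.jit.trace(model, example_input)

# The traced module produces the same results as the eager one.
print(torch.allclose(model(example_input), traced(example_input)))  # True
```

A traced module of this kind can be saved with `traced.save(...)` and later loaded in a Python-free environment, which is what bridges research prototyping and production deployment.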

Apart from this, PyTorch 1.0 accelerates the workflow of taking AI from research prototyping to production deployment, making it easier and more accessible to get started.

Now, let’s take a look at what this version brings:

Highlights of the version:

JIT: A set of compiler tools for bridging the gap between research in PyTorch and production

Brand New Distributed Package

C++ Frontend [API Unstable]

Torch Hub
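The brand-new distributed package above can be exercised even in a single process. The following sketch (backend, address and port are illustrative choices) initializes a one-member process group with the Gloo backend and runs an all_reduce; in a real job, the rank and world size would come from a launcher and span multiple machines.

```python
import os
import torch
import torch.distributed as dist

# Illustrative single-process setup: in a real job, rank and world_size
# come from the launcher, and MASTER_ADDR points at the rank-0 host.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

t = torch.ones(3)
dist.all_reduce(t)  # sums the tensor across all processes (here, just one)
print(t.tolist())   # [1.0, 1.0, 1.0] since world_size is 1

dist.destroy_process_group()
```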

Additional New Features:

N-dimensional empty tensors

New Operators

New Distributions

Sparse API Improvements

Additions to existing Operators and Distributions
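To illustrate the n-dimensional empty tensors mentioned above (a small sketch, assuming PyTorch >= 1.0): tensors may now have a zero-sized dimension in any position, and operators such as reductions and concatenation handle them gracefully.

```python
import torch

# A tensor with a zero-sized first dimension: it holds no elements,
# but keeps full shape information.
t = torch.empty(0, 3, 5)
print(t.shape)         # torch.Size([0, 3, 5])
print(t.numel())       # 0

# Reductions over empty tensors are well defined.
print(t.sum().item())  # 0.0

# Concatenating with a non-empty tensor of compatible shape works too.
u = torch.randn(2, 3, 5)
print(torch.cat([t, u]).shape)  # torch.Size([2, 3, 5])
```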

Bug Fixes:

Serious

Backwards Compatibility

Correctness

Error checking

Miscellaneous

Moving further, let’s take a look at a few new projects that extend PyTorch:

PyTorch has been applied to use cases ranging from image recognition to machine translation. As a result, a wide variety of projects from the developer community have emerged to extend and support development.

The teams at Facebook are also building and open-sourcing projects for PyTorch, such as Translate, a library for training sequence-to-sequence models based on Facebook’s machine translation systems. Among many others, a few such projects are given below:

Horovod: A distributed training framework that makes it easy for developers to take a single-GPU program and quickly train it on multiple GPUs.

PyTorch Geometry: A geometric computer vision library for PyTorch that provides a set of routines as well as differentiable modules.

TensorBoardX: A module for logging PyTorch models to TensorBoard, allowing developers to make use of the visualization tool during model training.

For AI developers looking to jump-start their work in a specific area, the ecosystem of supported projects provides easy access to some of the industry’s latest cutting-edge research.

With this approach, the community continues to grow, enabling developers to more easily learn how to build, train, and deploy machine learning models with PyTorch through various programs and platforms.