PyTorch 1.2 brings an improved and more polished TorchScript environment. According to the PyTorch team, 1.2 makes it even easier to ship production models, expands support for exporting ONNX-formatted models, and enhances module-level support for Transformers.

TensorBoard support is also no longer experimental. You can get started with from torch.utils.tensorboard import SummaryWriter.
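A minimal sketch of the now-stable TensorBoard integration (this assumes the tensorboard package is installed alongside PyTorch; the log directory name and scalar tag are illustrative):

```python
import torch
from torch.utils.tensorboard import SummaryWriter

# Write a few scalar values to an event file under "runs/demo"
# (hypothetical directory name).
writer = SummaryWriter(log_dir="runs/demo")
for step in range(5):
    writer.add_scalar("loss", 1.0 / (step + 1), step)
writer.close()
```

Running tensorboard --logdir=runs then shows the logged values in the browser.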

The new release significantly expands TorchScript’s support for the subset of Python used in PyTorch models. It delivers a new and easier-to-use API for compiling your models to TorchScript.
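A minimal sketch of the new compilation API: torch.jit.script now recursively compiles an nn.Module and the methods it calls, without requiring example inputs (the module below is a hypothetical example):

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):  # hypothetical example module
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        # Data-dependent control flow like this is preserved by TorchScript,
        # unlike tracing, which would bake in a single branch.
        if x.sum() > 0:
            return self.linear(x)
        return -self.linear(x)

# Compile the module (and everything it calls) to TorchScript.
scripted = torch.jit.script(MyModule())
out = scripted(torch.ones(1, 4))
```

The resulting ScriptModule can be saved with scripted.save(...) and loaded in a Python-free environment such as C++.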

The new release also features full support for exporting ONNX Opset versions 7 (v1.2), 8 (v1.3), 9 (v1.4), and 10 (v1.5), along with a constant-folding pass to support Opset 10. In addition, ScriptModule now includes support for multiple outputs, tensor factories, and tuples as inputs and outputs.

A number of additional PyTorch operators are now supported as well, including the ability to export custom operators.

Starting with 1.2, PyTorch includes a standard nn.Transformer module. This module relies entirely on an attention mechanism to draw global dependencies between input and output. Its individual components are designed so that they can be adopted independently. For example, nn.TransformerEncoder can be used on its own, without the larger nn.Transformer.

The new APIs include nn.Transformer, nn.TransformerEncoder, nn.TransformerEncoderLayer, nn.TransformerDecoder, and nn.TransformerDecoderLayer.
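A minimal sketch of using nn.TransformerEncoder on its own, without the full nn.Transformer (the layer sizes below are illustrative):

```python
import torch
import torch.nn as nn

# Build a single encoder layer, then stack two of them.
encoder_layer = nn.TransformerEncoderLayer(d_model=32, nhead=4)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

# Input shape: (sequence length, batch size, embedding dimension).
src = torch.rand(10, 8, 32)
out = encoder(src)  # output keeps the same shape as the input
```

The same pattern applies on the decoder side with nn.TransformerDecoderLayer and nn.TransformerDecoder.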