NVIDIA has announced its latest comprehensive SDK for the world’s most advanced system for embedded visual computing, NVIDIA Jetson TX1.

Free to download, NVIDIA JetPack 2.3 builds on an already accessible, high-performing platform for deep learning, doubling inference speed and efficiency.

Not only does the SDK include improved system software, tools, optimized libraries, and APIs, it also gives developers real-world examples so they can quickly start building innovative designs.

Key features in this release include:

TensorRT: Formerly known as GIE, TensorRT is a deep learning inference engine that maximizes runtime performance for applications such as image classification, segmentation, and object detection, enabling developers to deploy real-time neural network applications on Jetson. It delivers twice the deep learning inference performance of previous implementations based on cuDNN alone.

cuDNN 5.1: A CUDA-accelerated library for deep learning that provides highly tuned implementations of standard routines such as convolutions, activation functions, and tensor transformations. Support for advanced network models such as RNNs and LSTMs has also been added in this release.

CUDA 8: The latest release adds host compiler support for GCC 5.x, and the NVCC CUDA compiler has been optimized for up to 2x faster compilation. CUDA 8 also includes nvGRAPH, a GPU-accelerated library for graph analytics, along with new APIs for half-precision floating point computation in CUDA kernels and in the cuBLAS and cuFFT libraries.