NVIDIA and Arm just announced that they are partnering to bring deep learning inferencing technology to mobile, consumer electronics and Internet of Things (IoT) devices. As a result of the partnership, NVIDIA and Arm will integrate NVIDIA’s open-source Deep Learning Accelerator (NVDLA) architecture into Arm’s Project Trillium platform for machine learning.

“Accelerating AI at the edge is critical in enabling Arm’s vision of connecting a trillion IoT devices,” said Rene Haas, executive vice president and president of the IP Group at Arm. “Today we are one step closer to that vision by incorporating NVDLA into the Arm Project Trillium platform, as our entire ecosystem will immediately benefit from the expertise and capabilities our two companies bring in AI and IoT.”

[Image: NVIDIA Xavier ARM64 SoC with Volta GPU. Source: NVIDIA]

The collaboration is meant to make it easier for IoT chip and device companies to integrate AI into their products. Arm’s Project Trillium is integral to the Arm Heterogeneous ML compute platform, and leverages Arm ML processors, the Arm object detection (OD) processor, and open-source Arm NN software. NVIDIA’s NVDLA is a free, open architecture meant to promote a standard method for designing deep learning inference accelerators. It is based on NVIDIA’s Xavier platform architecture and is scalable, configurable and designed to simplify integration and portability.

“Inferencing will become a core capability of every IoT device in the future,” said Deepu Talla, vice president and general manager of Autonomous Machines at NVIDIA. “Our partnership with Arm will help drive this wave of adoption by making it easy for hundreds of chip companies to incorporate deep learning technology.”

NVDLA is supported by many of NVIDIA’s developer tools, including upcoming versions of TensorRT, which is a high-performance deep learning inference optimizer and runtime. The open-source nature of NVDLA and other open-source projects allows for features to be added regularly, including those contributed by others in the research community.