NVIDIA Announces Two New Pascal-Based GPU Accelerators

The new Tesla P40 and P4 GPU accelerators are built specifically for neural network workloads, boosting AI inference speed by up to 45x over CPUs and delivering a 4x improvement over the previous generation of GPUs.

Both cards target multi-GPU inference servers, where a single-root-complex PCI-E design has been shown to dramatically improve GPU peer-to-peer communication efficiency compared with traffic that must cross QPI or additional PCI-E links between CPU sockets.

The Tesla P4's small form factor and low-power design, starting at 50 watts, let it fit in any server and help make it up to 40x more energy efficient than CPUs for inference in production workloads. Its Pascal GP104 GPU offers high floating-point throughput and efficiency, and features instructions optimised for deep-learning inference computations. The more powerful Tesla P40 clocks in at 12 teraflops of single-precision performance and is capable of…
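As a rough sketch of where a headline figure like "12 teraflops" comes from, peak single-precision throughput is conventionally estimated as CUDA cores × clock × 2 (each fused multiply-add counts as two floating-point operations). The core count and boost clock below are NVIDIA's published Tesla P40 specifications, used here as assumptions:

```python
def peak_fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Theoretical FP32 peak: cores * clock * 2 FLOPs per cycle (one FMA)."""
    return cuda_cores * boost_clock_ghz * 2 / 1000.0

# Assumed Tesla P40 specs: 3840 CUDA cores, ~1.531 GHz boost clock.
p40 = peak_fp32_tflops(cuda_cores=3840, boost_clock_ghz=1.531)
print(f"Tesla P40 peak FP32: {p40:.2f} TFLOPS")  # ~11.76, marketed as 12
```

The same arithmetic explains the inference-throughput multipliers: Pascal's 8-bit integer dot-product path evaluates four INT8 multiply-accumulates per lane per cycle, which is where the quoted 4x gain over single-precision math on the prior generation comes from.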