The major cloud providers all offer services for building artificial intelligence software on their platforms, but some enterprises still opt to run their projects in-house. NetApp Inc. today introduced a specialized new system to target this market.

Ontap AI combines the storage supplier’s A800 flash array with the DGX-1, a specialized server built by Nvidia Corp. for running AI workloads. The latter machine is based on the chipmaker’s graphics processing units, which lend themselves well to running AI models. The DGX-1 packs no fewer than eight Nvidia V100 GPUs and 256 gigabytes of GPU memory in a chassis that sells for a hefty $129,000. According to Nvidia, the system can provide over a petaflop of computational power. One petaflop equals a quadrillion floating-point operations per second, a standard unit of computer performance.

NetApp’s A800 array packs a punch as well. The system can be equipped with as many as 79 petabytes of flash storage, which translates into a maximum effective capacity of up to 258.3 petabytes. NetApp claims that the array is capable of carrying out 1.3 million input/output operations per second with latency below 500 microseconds, which enables it to quickly feed large volumes of data into AI models running on the DGX-1.

According to NetApp, the fact that Ontap AI integrates the two systems into one solution makes it easier for companies to set up a deployment. It also simplifies scaling, with NetApp claiming that the solution can support a 1:5 ratio between storage and compute.

Ontap AI is not without competition. Rival flash storage maker Pure…
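The headline figures above can be sanity-checked with some back-of-envelope arithmetic. The sketch below is illustrative only: it assumes Nvidia's "over a petaflop" claim refers to roughly one petaflop of combined mixed-precision throughput split evenly across the eight GPUs, and it treats the gap between the A800's raw flash and its quoted effective capacity as an implied data-reduction (deduplication and compression) ratio.

```python
# Back-of-envelope checks on the figures quoted in the article.

PFLOP = 1e15  # one petaflop = a quadrillion floating-point ops per second

# DGX-1: eight V100 GPUs delivering "over a petaflop" combined.
# Assuming exactly one petaflop split evenly across the GPUs:
gpus = 8
per_gpu_tflops = PFLOP / gpus / 1e12
print(f"Implied per-GPU throughput: {per_gpu_tflops:.0f} TFLOPS")

# A800: 79 PB of raw flash vs. an effective capacity of up to 258.3 PB.
# The gap implies an assumed data-reduction ratio of roughly 3.3:1.
raw_pb, effective_pb = 79, 258.3
print(f"Implied data-reduction ratio: {effective_pb / raw_pb:.2f}:1")
```

The 125-TFLOPS-per-GPU figure that falls out of the first calculation lines up with the V100's advertised mixed-precision Tensor Core performance, which is presumably where the petaflop claim comes from.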