
By DE Editors

September 18, 2019

PNY Technologies has been NVIDIA’s channel partner across NALA and EMEAI for over 15 years. With the release of the new NVIDIA-Powered Data Science Workstation specification, PNY delivers the components workstation vendors need to build these new systems, helping engineers and data scientists apply artificial intelligence to product development.

The specification begins with dual NVIDIA Quadro RTX graphics processing units (GPUs), based on the Turing GPU architecture. Each RTX 8000 supplied by PNY has 48GB of GPU memory (the RTX 6000 has 24GB), the capacity required for the large data sets typical of artificial intelligence (AI) training, deep learning, and machine learning analysis. The NVIDIA GV100, a Volta-class GPU also available from PNY, may also be used in a data science workstation.

Quadro RTX GPUs include two new types of compute cores: RT Cores and Tensor Cores. RT is short for ray tracing (and evokes real time); these cores are specialized for high-performance visualization on the local workstation, and they significantly accelerate the visualization of data science analyses.

Tensor Cores, available with Quadro RTX or the GV100, specialize in matrix math, common to deep learning and some applications in other fields that now run only on high-performance computing (HPC) clusters or cloud computing platforms. Tensor Cores are the key to high-speed calculating for artificial intelligence R&D.

Tensor Cores perform a fused multiply-add: two 4x4 FP16 (16-bit floating point) matrices are multiplied, and the result is added to a 4x4 FP16 or FP32 matrix. Tensor Cores perform millions of these operations every second, far faster than commodity CPU or GPU compute circuitry. A specific advantage is the Tensor Core’s ability to accumulate results in FP32; according to NVIDIA scientists, 32-bit accumulation is a crucial aspect of network convergence in AI research. The theoretical performance boost from Tensor Cores is 8x; in day-to-day use, throughput generally improves by about 4x. Data science models often take several days to run; a 4x speedup completes a four-day job in one day.
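As a rough sketch of that operation (not NVIDIA’s hardware implementation; this uses NumPy on the CPU purely to illustrate the FP16-multiply, FP32-accumulate pattern):

```python
import numpy as np

# Sketch of the Tensor Core fused multiply-add: D = A x B + C,
# where A and B are 4x4 FP16 matrices and C (and D) accumulate in FP32.
A = np.arange(16, dtype=np.float16).reshape(4, 4)  # FP16 input matrix
B = np.eye(4, dtype=np.float16)                     # FP16 input matrix
C = np.ones((4, 4), dtype=np.float32)               # FP32 accumulator

product = A @ B                      # multiply performed in FP16
D = product.astype(np.float32) + C   # result accumulated in FP32

# D keeps full FP32 precision even though the inputs were FP16 --
# the property NVIDIA cites as crucial for network convergence.
```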

Several vendors now ship NVIDIA-Powered Data Science Workstations built with PNY components to NVIDIA’s standard, including AMAX, BOXX, COLFAX, EXXACT, Image & Technologie, OSS, RAVE Computer, and THINKMATE. “AI offers a tremendous market and substantial competitive advantage,” says Carl Flygare, Quadro Product Marketing Manager at PNY Technologies. By following the NVIDIA specification, Flygare says, select PNY partners can offer “a certified and turnkey” workstation “fully equipped with the best hardware and a full stack of AI and Data Science tools right out of the box.”

In each workstation, up to four NVIDIA GPUs are linked using four-way NVIDIA NVLink architecture (also supplied by PNY). This configuration delivers 500 teraFLOPS of power, equivalent to hundreds of typical servers.

Data science workflows are broadly similar from one task to the next. First the data must be “wrangled,” a step formally known as ETL, for Extract, Transform, Load. From the prepared data, the data scientist then trains a model, which is subsequently used for inference and prediction. Training is time consuming; inference is fast.

The Data Science Workstation specification calls for Canonical Ubuntu Linux 18.04 (“Bionic Beaver”) as the operating system. Along with Ubuntu comes a set of software libraries from NVIDIA CUDA-X AI, the company’s collection of GPU-accelerated libraries for AI research. The collection includes the open source RAPIDS, TensorFlow, PyTorch, and Caffe libraries, plus several NVIDIA-written acceleration libraries for machine learning, AI, and deep learning.

The price of a Data Science Workstation varies depending on the manufacturer and the exact options selected.
