HPC workloads benefit tremendously from high-throughput communication between GPU accelerators in a server. Workloads common in Machine Learning and Artificial Intelligence require frequent memory transfers between GPUs and perform best with minimal latency and maximum device-to-device bandwidth. GPUs packaged with NVIDIA NVLink™ interconnect technology offer a total of 150 GB/s unidirectional (300 GB/s bidirectional) bandwidth between GPU accelerators, nearly 10 times the bandwidth of GPU accelerators in standard PCIe form factors. TYAN’s Thunder HX TA88-B7107 takes full advantage of NVIDIA NVLink technology, offering eight NVIDIA Tesla V100 SXM2 GPU accelerators packed within a 2U server enclosure. With four PCIe x16 slots available for high-speed networking and 24 DIMM slots supporting up to 3 TB of system RAM, the TA88-B7107 is TYAN’s highest-performance GPU server option.

TYAN is also exhibiting standard PCIe GPU servers with support for the new NVIDIA Tesla V100 32 GB (with double the memory capacity of the previous model), P40, and P4 PCIe GPU accelerators. These include a pair of 4U server systems: the Thunder HX FT77D-B7109, which supports up to eight GPUs for massively parallel workloads such as scientific computing and large-scale facial recognition, and the Thunder HX FA77-B7119, which supports up to ten GPUs within a single server enclosure and is ideal for running multiple jobs in parallel in a virtualized environment.

The Intel® Xeon® Scalable Processor-based Thunder HX GA88-B5631 and AMD EPYC™ processor-based Transport HX GA88-B8021 both support up to four NVIDIA Tesla V100 32 GB GPUs within a 1U server, making them the highest-density GPU servers on the market. Both platforms offer an additional PCIe x16 slot next to the GPU cards to accommodate high-speed networking adapters up to 100 Gb/s, such as EDR InfiniBand or 100 Gigabit Ethernet. These platforms are ideal for Artificial Intelligence, Machine Learning, and Deep Neural Network workloads. Additionally, the GA88-B8021 can support up to six NVIDIA Tesla P4 GPU accelerators for inferencing applications.

“AI is transforming every industry by enabling more accurate decisions to be made based on the massive amounts of data being collected. To provide an efficient GPU computing platform to our customers, TYAN’s leading portfolio of GPU server platforms is based on the latest NVIDIA Tesla technology and optimized to deliver faster overall performance, greater efficiency, and lower energy and cost per unit of computation for the AI revolution,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit.

“The NVIDIA GPU computing platform is the engine for modern AI, accelerating all major deep learning frameworks,” said Paresh Kharya, Product Marketing Manager for the Accelerated Computing Group at NVIDIA. “Tesla V100 32 GB GPUs, now available in TYAN servers, provide twice the memory capacity to drive up to 50% faster results on deeper and more accurate AI models.”

** NVIDIA and Tesla are registered trademarks of NVIDIA Corporation in the United States and other countries.

About TYAN

TYAN, as a leading server brand of MiTAC Computing Technology Corporation under the MiTAC Group (TSE:3706), designs, manufactures and markets advanced x86 and x86-64 server/workstation board technology, platforms and server solution products. Its products are sold to OEMs, VARs, System Integrators and Resellers worldwide for a wide range of applications. TYAN enables its customers to be technology leaders by providing scalable, highly integrated, and reliable products for server appliances and solutions in the HPC, hyper-scale/data center, server storage and security appliance markets. For more information, visit MiTAC’s website at http://www.mic-holdings.com or TYAN’s website at http://www.tyan.com.