
Xilinx Data Center AI Platform

For low-latency AI inference, Xilinx delivers the highest throughput at the lowest latency. In standard benchmark tests on GoogLeNet V1, the Xilinx Alveo U250 platform delivers more than 4x the throughput of the fastest existing GPU for real-time inference. Learn more in the whitepaper: Accelerating DNNs with Xilinx Alveo Accelerator Cards

Xilinx Edge AI Platform

AI Inference performance leadership with CNN pruning technology.

5X to 50X network performance optimization

Increases FPS and reduces power
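Pruning reduces a network's compute by removing low-contribution weights. As a minimal sketch of one common approach, magnitude-based pruning (illustrative only; not the actual Xilinx pruning tool, whose algorithm is not described here), the smallest-magnitude weights are zeroed until a target sparsity is reached:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    flat = np.abs(weights).flatten()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pw = magnitude_prune(w, 0.9)
print(float(np.mean(pw == 0)))  # fraction of zeroed weights, ~0.9
```

In practice pruning is interleaved with fine-tuning passes so accuracy recovers, which is what makes 5X to 50X reductions feasible on over-parameterized CNNs.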

Optimization/Acceleration Compiler Tools

Supports networks from TensorFlow, Caffe, and MXNet

Compiles networks to optimized Xilinx Edge runtime
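One kind of optimization such a compiler performs is operator fusion, merging adjacent graph nodes into a single hardware kernel call. A toy sketch (the node names and function below are illustrative, not the Xilinx toolchain API):

```python
def fuse_conv_relu(graph):
    """Fuse adjacent ('conv', 'relu') node pairs into one 'conv_relu' node,
    so the activation runs in the same kernel pass as the convolution."""
    fused, i = [], 0
    while i < len(graph):
        if i + 1 < len(graph) and graph[i] == "conv" and graph[i + 1] == "relu":
            fused.append("conv_relu")
            i += 2
        else:
            fused.append(graph[i])
            i += 1
    return fused

print(fuse_conv_relu(["conv", "relu", "pool", "conv", "relu"]))
# ['conv_relu', 'pool', 'conv_relu']
```

Fusion avoids writing intermediate activations back to memory, which matters more on bandwidth-constrained edge devices.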

Lowest Latency AI Inference

High Throughput OR Low Latency

Achieves throughput by using a large batch size, but must wait for all inputs in the batch to be ready before processing, resulting in high latency.
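The batching penalty can be made concrete with a simple model: the first request in a batch must wait for the rest of the batch to arrive before compute can start. The numbers below are illustrative, not measured figures:

```python
def inference_latency(batch_size, arrival_interval_ms, compute_ms_per_batch):
    """Latency seen by the first request in a batch: time spent waiting
    for the batch to fill, plus the compute time for the whole batch."""
    wait_ms = (batch_size - 1) * arrival_interval_ms
    return wait_ms + compute_ms_per_batch

# Requests arriving one per millisecond (assumed rate):
high_batch = inference_latency(64, 1.0, 8.0)  # 63 ms waiting + 8 ms compute = 71 ms
low_batch = inference_latency(1, 1.0, 2.0)    # no waiting, 2 ms compute
print(high_batch, low_batch)
```

Even when the large batch yields far more inferences per second, the first input waited tens of milliseconds before processing began, which is the tradeoff the next section addresses.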

High Throughput AND Low Latency

Processes inputs as they arrive, without waiting to fill a large batch, sustaining high throughput at low latency for real-time inference.

Whole App Acceleration

Optimized hardware acceleration of both AI inference and other performance-critical functions, achieved by tightly coupling custom accelerators on a single adaptable silicon device.

This delivers end-to-end application performance significantly greater than that of a fixed-architecture AI accelerator such as a GPU, where the application's other performance-critical functions must still run in software, without the performance or efficiency of custom hardware acceleration.