Two years ago, Intel spent $16.7 billion to acquire FPGA chip vendor Altera. So, what’s it going to do with that big purchase? The company is finally ready to say.

A field-programmable gate array, or FPGA, is an integrated circuit that can be customized to perform specific functions. Whereas an x86 processor executes only the x86 instruction set, an FPGA can be reprogrammed on the fly to perform a specified task. That’s why x86 chips are considered general-purpose processors and FPGAs are viewed as customizable.

It sounds as though FPGAs will compete with the Xeon Phi accelerator cards, but Intel says that’s not the case. The FPGA differs from the Xeon Phi in that it offers multifunction acceleration, whereas the Phi offers specialized acceleration. So the FPGA complements Phi; it does not compete with it.

Like GPUs, FPGAs will be used in one of two ways: offload and inline. Offload, also called look-aside, means incoming data goes through the CPU first before being moved to the FPGA for processing. Inline means the CPU stays out of the way, and data flows directly into and out of the FPGA for processing.

FPGAs better for certain tasks than Xeon Phi or GPUs

Now Intel is positioning the Altera FPGAs as co-processors. The company admits they will compete with Xeon Phi in some ways but argues that the FPGAs are more versatile and better suited to certain tasks than the Phi or GPUs, according to Bernhard Friebe, senior director of software solutions in the Intel Programmable Solutions Group.

“The advantage for FPGA is GPUs play in some areas but not all, and if you look at the use model of inline vs. offload, they are limited to offload mostly. So, there’s a broader application space you can cover with FPGA,” he said.

The integrated solution provides tight coupling between CPU and FPGA with very high bandwidth, while the external PCI Express card is not as tightly coupled. For ultra-low latency and high-bandwidth applications, integrated is a great fit, Friebe said.

“Most of the differentiation [between integrated and discrete] is due to system architecture and data movement. In a data center environment where [you] run many different workloads, you don’t want to tie it to a particular app,” he said.

The more specialization you do, the more performance you can squeeze out of the accelerator, Friebe said, and as multifunction accelerators, FPGAs will achieve strong performance in some apps. FPGAs are by nature highly parallel and reprogrammable, which lends itself to accelerating workloads that can be parallelized. These include data analytics, artificial intelligence (AI) and machine learning, video transcoding, compression, security, financial analysis, and genomics.

Two-pronged FPGA strategy

Intel is taking a two-pronged approach with its FPGA strategy, offering both hybrid CPU-FPGA processors — similar to its desktop CPUs that have a GPU integrated on the die — and discrete Arria or Stratix brand FPGA devices on a PCI Express card.

The hybrid CPU-FPGA device will be based on a Skylake-generation CPU and an Arria 10 FPGA and will use the faster UltraPath Interconnect (UPI) link, Intel’s successor to QuickPath Interconnect (QPI). Not much is known about UPI other than that it will operate at 9.6GT/s or 10.4GT/s data transfer rates and will be considerably more efficient than QPI because it will support multiple requests per message.

Intel is also providing a complete developer toolset and APIs to design apps for both integrated and discrete products using the same tools, accelerators and libraries. All are written in OpenCL, a C-like language.

“The beauty is it’s standardized and open source. Their investment is forward-compatible to new-generation processors, easy to migrate, and provides an abstraction for FPGA developers to target a much larger user base,” Friebe said.

Intel is sampling a discrete card, called a Programmable Acceleration Card (PAC), with the Arria 10 GX FPGA now, and it expects availability in the first half of 2018. A Xeon Scalable Platform with the integrated FPGA on a Skylake-generation Xeon is sampling today, with general availability in the second half of 2018.
