Design competition

FPT 2017 will follow the tradition of previous conferences of having an FPGA design competition associated with the conference.

The competition topic has changed this year (from Trax) to: FPGA-accelerated Deep Learning.

Research in deep learning, machine learning, and neural networks has produced ground-breaking algorithmic advances over the past few years.

These advances have implications at many scales, from the design of high-performance systems to low-power embedded realizations in edge devices.

GPUs are currently popular for training these neural networks on large datasets, while ASICs and custom chips such as the Google TPU are emerging as attractive platforms for inference.

FPGAs are not far behind: FPGA realizations of neural networks for Bing image search and the Xilinx PYNQ BNN (binarized neural network) project have shown promising initial results.

To kick-start further hardware innovation in this field, we will hold a Machine Learning competition at FPT 2017 this year in Melbourne.

Submissions may be in two broad categories:

Training, or inference, using any Caffe2 image-classification model on the ImageNet dataset (e.g., AlexNet).

Acceleration on any FPGA board of your choosing. Examples include the Xilinx PYNQ-Z1, the ZedBoard, and the Altera DE1-SoC. We expect to have some boards available at the conference (TBC), but you may BYOB (bring your own board).

We will set up a test infrastructure with a reference server for validating runtime and accuracy, as well as a power measurement unit.

Submissions will be scored using a function that combines runtime, area usage, power, and accuracy.

A leaderboard with scores for all teams will help determine the winning entry.

Exact details of the testing infrastructure will be provided in due course.
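To give prospective entrants a feel for the trade-offs involved, here is a minimal sketch of how such a combined score might be computed. The functional form, weights, and units below are purely illustrative assumptions, not the official FPT 2017 metric (which is yet to be announced): it simply rewards accuracy while penalizing runtime, power, and resource usage.

```python
def score(accuracy, runtime_s, power_w, area_util):
    """Hypothetical combined score for a competition entry.

    accuracy  : top-1 classification accuracy in [0, 1]
    runtime_s : wall-clock time per inference batch, in seconds
    power_w   : average measured power draw, in watts
    area_util : fraction of FPGA resources used, in (0, 1]

    The exact formula used at FPT 2017 is TBD; this form is an
    illustrative assumption: higher accuracy is better, and lower
    runtime, power, and area are better.
    """
    return accuracy / (runtime_s * power_w * area_util)

# Example: faster and lower-power entries score higher at equal accuracy.
entry_a = score(accuracy=0.90, runtime_s=1.0, power_w=2.0, area_util=0.5)
entry_b = score(accuracy=0.90, runtime_s=0.5, power_w=2.0, area_util=0.5)
```

Under any metric of this shape, teams face a design-space trade-off: for example, a binarized network may sacrifice a few points of accuracy to gain large reductions in runtime, power, and area.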

A paper of at most four pages must accompany the submission. It will be reviewed and included in the conference proceedings submitted to IEEE Xplore.