Computer Architecture, Parallel and Distributed Computing

Integration, the VLSI Journal

Special Issue on Hardware Acceleration for Machine Learning

Abstract deadline:

Full paper deadline: 2019-01-15

Impact factor: 0.906

Journal difficulty:

CCF rank: C

CAS (Chinese Academy of Sciences) JCR partition:

• Major category: Engineering & Technology - Q4

• Subcategory: Computer Science: Hardware - Q4

• Subcategory: Engineering: Electrical & Electronic - Q4

Overview

Many machine learning (ML) workloads, especially those related to deep neural networks, are both computation and memory intensive. Hardware accelerators are essential to ensure that such ML applications not only meet performance and throughput targets but also satisfy power- and energy-efficiency requirements. In this special issue of Integration, the VLSI Journal, we call for the most advanced research results on hardware acceleration of machine learning, covering both training and inference. Topics of interest include (but are not limited to) the following:

• Software/compilers/tools for mapping ML workloads to accelerators

• New design methodologies for ML-centric or ML-aware hardware accelerators

• New microarchitecture designs of hardware accelerators for ML

• ML workload acceleration on existing accelerators such as GPUs, FPGAs, CGRAs, or ASICs

• Accelerators for new ML algorithms such as adversarial learning and transfer learning