ILFO: Adversarial Attack on Adaptive Neural Networks

Mirazul Haque, Anki Chauhan, Cong Liu, Wei Yang

Abstract

With the increase in the number of layers and parameters in neural networks, the energy consumption of neural networks has become a great concern to society, especially to users of handheld or embedded devices. In this paper, we investigate the robustness of neural networks against energy-oriented attacks. Specifically, we propose the ILFO (Intermediate Output-Based Loss Function Optimization) attack against a type of energy-saving neural network, the Adaptive Neural Network (AdNN). An AdNN can dynamically deactivate parts of its model based on the needs of the input to decrease energy consumption. ILFO leverages intermediate outputs as a proxy to infer the relation between an input and its corresponding energy consumption. ILFO has increased the remaining FLOPs (floating-point operations) count of AdNNs by up to 100% with minimal noise added to input images. To our knowledge, this is the first attempt to attack the energy consumption of a DNN.
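The core idea described above can be illustrated with a minimal sketch. The model, gate weights, threshold, FLOP counts, and loss coefficient below are all hypothetical and chosen only for illustration: a toy "adaptive" model skips an expensive block when a sigmoid gate score is low, and because the FLOPs count itself is non-differentiable, the attack optimizes the differentiable intermediate output (the gate score) as a proxy while penalizing the perturbation size.

```python
import numpy as np

# Toy "adaptive" model (hypothetical, for illustration only):
# the expensive block runs only if the gate score g(x) exceeds a
# threshold, so inputs with a low g(x) consume fewer FLOPs.
THRESHOLD = 0.5
w = np.array([0.3, -0.2, 0.1])  # gate weights (arbitrary)

def gate_score(x):
    # sigmoid gate over a linear intermediate output
    return 1.0 / (1.0 + np.exp(-w @ x))

def flops(x):
    # cheap path: 10 FLOPs; the skippable block adds 90 more
    return 10 + (90 if gate_score(x) > THRESHOLD else 0)

def attack(x, steps=300, lr=1.0, c=0.01):
    # ILFO-style idea (sketch): minimize c*||x' - x||^2 - g(x'),
    # i.e. maximize the intermediate output with minimal noise.
    x_adv = x.copy()
    for _ in range(steps):
        g = gate_score(x_adv)
        # gradient of the loss w.r.t. x_adv (sigmoid' = g*(1-g))
        grad = 2 * c * (x_adv - x) - g * (1 - g) * w
        x_adv -= lr * grad
    return x_adv

x = np.array([-2.0, 2.0, -1.0])   # benign input: gate stays closed
x_adv = attack(x)
print(flops(x), flops(x_adv))     # FLOPs before vs. after the attack
```

In this sketch the benign input takes only the cheap path, while the perturbed input forces the gate open, so the model executes the full block and its FLOPs count rises, mirroring the attack's goal of inflating an AdNN's remaining computation.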