Title: Poster: Image Disguising for Privacy-preserving Deep Learning
Authors: Sharma, Sagar; Chen, Keke
Venue: ACM Conference on Computer and Communications Security (CCS) 2018, 10/2018
Keywords: Privacy-preserving
URL: http://www.knoesis.org/node/2919

Abstract: Due to the high training costs of deep learning, model developers often rent cloud GPU servers to achieve better efficiency. However, this practice raises privacy concerns. An adversarial party may be interested in 1) personally identifiable information encoded in the training data and the learned models, 2) misusing the sensitive models for its own benefit, or 3) launching model inversion (MIA) and generative adversarial network (GAN) attacks to reconstruct replicas of training data (e.g., sensitive images). Learning from encrypted data seems impractical due to the large training data and expensive learning algorithms, while differential-privacy based approaches have to make significant trade-offs between privacy and model quality. We investigate the use of image disguising techniques to protect both data and model privacy. Our preliminary results show that with block-wise permutation and transformations, surprisingly, disguised images still give reasonably well performing deep neural networks (DNN). The disguised images are also resilient to the deep-learning enhanced visual discrimination attack and provide an extra layer of protection from MIA and GAN attacks.
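The block-wise permutation and transformation scheme mentioned in the abstract can be sketched as follows. This is a minimal illustration under assumed details, not the authors' implementation: the block size, the use of a seeded RNG as the secret key, and pixel negation as the per-block transformation are all assumptions for demonstration.

```python
import numpy as np

def disguise_image(img, block_size, seed):
    """Illustrative sketch of block-wise image disguising (assumed details):
    split a grayscale image into square blocks, permute the blocks with a
    secret key (here, an RNG seed), and apply a per-block transformation
    (here, pixel negation on a key-selected subset of blocks)."""
    h, w = img.shape
    bh, bw = h // block_size, w // block_size
    # Cut the image into (bh * bw) blocks of shape (block_size, block_size).
    blocks = (img[:bh * block_size, :bw * block_size]
              .reshape(bh, block_size, bw, block_size)
              .swapaxes(1, 2)
              .reshape(bh * bw, block_size, block_size))
    rng = np.random.default_rng(seed)
    # Secret block-wise permutation derived from the key.
    blocks = blocks[rng.permutation(bh * bw)]
    # Per-block transformation: negate pixels in key-selected blocks.
    flip = rng.random(bh * bw) < 0.5
    blocks[flip] = 255 - blocks[flip]
    # Reassemble the disguised image.
    return (blocks.reshape(bh, bw, block_size, block_size)
            .swapaxes(1, 2)
            .reshape(bh * block_size, bw * block_size))
```

A DNN would then be trained directly on such disguised images; the same key (seed) must be used consistently so that training and inference see the same disguised representation.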