Winter School

Prof. Ce Zhu (University of Electronic Science and Technology of China, China)

Lecture 2
Substitute Training for Black-Box Adversarial Attacks: A GAN-based Approach without any Real Training Data
Abstract
Recent studies show that machine learning models are highly vulnerable to adversarial attacks. Substitute attacks, typically black-box ones, employ pre-trained models to generate adversarial examples. It is generally accepted that substitute attacks need a large amount of real training data, combined with model-stealing methods, to obtain a substitute model. However, real training data may be difficult (if not impossible) to obtain for some practical tasks, e.g., in the medical or financial sectors. As a first study of its kind, this talk presents our proposed model-stealing method, which does not require any real training data. The method develops specially designed generative adversarial networks (GANs) for substitute training. Experimental results demonstrate that substitute models produced by the proposed method, without any real training data, achieve performance competitive with baseline models trained on the same training set as the attacked models.
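To make the substitute-training idea concrete, the following is a minimal, hypothetical sketch (not the talk's actual method): a "victim" model is queried only through its hard-label predictions, and a substitute is fitted entirely on synthetic inputs. For brevity, the GAN generator is replaced here by plain random-noise sampling, and both models are simple linear classifiers; all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 8, 3  # input dimension, number of classes

# Hypothetical black-box "victim" model: we may only query its predicted labels.
W_victim = rng.normal(size=(D, C))

def query_black_box(x):
    """Return hard labels only, as a black-box API would."""
    return (x @ W_victim).argmax(axis=1)

# Substitute model, trained purely on synthetic queries (no real data).
W_sub = np.zeros((D, C))
lr = 0.5
for step in range(300):
    x = rng.normal(size=(64, D))   # stand-in "generator": random noise samples
    y = query_black_box(x)         # label the synthetic batch via the victim
    logits = x @ W_sub
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(C)[y]
    # Softmax cross-entropy gradient step toward the victim's labels
    W_sub -= lr * x.T @ (p - onehot) / len(x)

# Measure how often the substitute agrees with the victim on fresh inputs
x_test = rng.normal(size=(1000, D))
agreement = ((x_test @ W_sub).argmax(axis=1) == query_black_box(x_test)).mean()
print(f"substitute/victim agreement: {agreement:.2f}")
```

In the actual approach described in the talk, the noise sampler above would be replaced by a trained GAN generator that adapts its synthetic queries during substitute training; this toy version only illustrates the query-and-imitate loop common to model-stealing attacks.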