Black-box Attack in Partial Auxiliary Information Setting
Our team designed a black-box transfer attack that uses a self-supervised auxiliary model. Our method relaxes the assumption, made by existing transfer attacks, that the auxiliary and target models are trained on the same dataset.
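As a rough illustration of the general idea (not this paper's specific method), a transfer attack crafts an adversarial example against an auxiliary (surrogate) model it can access, then submits that example to the unseen black-box target. The sketch below uses one-step FGSM on a hypothetical linear surrogate; all weights, names, and the epsilon value are illustrative assumptions.

```python
import numpy as np

# Hypothetical linear models: logits = W @ x. The surrogate stands in for
# the attacker-side auxiliary model; the target is the unseen black box.
rng = np.random.default_rng(0)
W_surrogate = rng.normal(size=(3, 8))
W_target = W_surrogate + 0.1 * rng.normal(size=(3, 8))  # similar but unknown

def predict(W, x):
    return int(np.argmax(W @ x))

def fgsm_on_surrogate(W, x, label, eps=0.5):
    """One-step FGSM: move the input in the direction that increases
    the surrogate's cross-entropy loss for its current label."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.eye(W.shape[0])[label]
    grad_x = W.T @ (p - onehot)  # d(cross-entropy)/dx for softmax + linear
    return x + eps * np.sign(grad_x)

x = rng.normal(size=8)
y = predict(W_surrogate, x)
x_adv = fgsm_on_surrogate(W_surrogate, x, y)
# x_adv is crafted entirely on the surrogate; the attack "transfers" when
# the black-box target's prediction on x_adv also flips.
```

The point of the sketch is the division of labor: gradients are only ever computed on the attacker's own surrogate, while the target is queried as a pure black box.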
What Have We Learned About Black-box Attacks Against Classifiers?
Our team built a comprehensive platform for reproducing existing black-box attacks against image and malware classifiers and proposed a general taxonomy of these attacks based on the scenarios in which each is applicable in practice.