Wang, Yixiang; Liu, Jiqiang; Chang, Xiaolin; Wang, Jianhua; Rodríguez, Ricardo J.
AB-FGSM: AdaBelief Optimizer and FGSM-Based Approach to Generate Adversarial Examples (Journal Article)
In: Journal of Information Security and Applications, vol. 68, art. no. 103227, 2022, ISSN: 2214-2126.
@article{WLCWR-JISA-22,
title = {AB-FGSM: AdaBelief Optimizer and FGSM-Based Approach to Generate Adversarial Examples},
author = {Yixiang Wang and Jiqiang Liu and Xiaolin Chang and Jianhua Wang and Ricardo J. Rodríguez},
url = {http://webdiis.unizar.es/~ricardo/files/papers/WLCWR-JISA-22.pdf},
doi = {10.1016/j.jisa.2022.103227},
issn = {2214-2126},
year = {2022},
date = {2022-08-01},
journal = {Journal of Information Security and Applications},
volume = {68},
pages = {103227},
abstract = {Deep neural networks (DNNs) can be misclassified by adversarial examples, which are legitimate inputs integrated with imperceptible perturbations at the testing stage. Extensive research has made progress for white-box adversarial attacks to craft adversarial examples with a high success rate. However, these crafted examples have a low success rate in misleading black-box models with defensive mechanisms. To tackle this problem, we design an AdaBelief-based iterative Fast Gradient Sign Method (AB-FGSM) to generalize adversarial examples. By integrating the AdaBelief optimizer into the iterative FGSM (I-FGSM), the generalization of adversarial examples is boosted, considering that the AdaBelief method can find the transferable adversarial point in the ε ball around the legitimate input on different optimization surfaces. We carry out white-box and black-box attacks on various adversarially trained models and ensemble models to verify the effectiveness and transferability of the adversarial examples crafted by AB-FGSM. Our experimental results indicate that the proposed AB-FGSM can efficiently and effectively craft adversarial examples in the white-box setting compared with state-of-the-art attacks. In addition, the transfer rate of adversarial examples is 4% to 21% higher than that of state-of-the-art attacks in the black-box setting.},
keywords = {adversarial examples, deep learning, generalization, optimization, Security, Transferability},
pubstate = {published},
tppubtype = {article}
}
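The abstract describes I-FGSM driven by the AdaBelief optimizer: accumulate AdaBelief moments of the input gradient, take sign steps, and stay within the ε ball around the legitimate input. A minimal sketch of that idea follows; the function name, default hyperparameters, and the clipping-based projection are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ab_fgsm(x, grad_fn, eps=0.03, steps=10,
            beta1=0.9, beta2=0.999, delta=1e-8):
    """Sketch of an AdaBelief-driven iterative FGSM (hypothetical API).

    x       -- legitimate input (NumPy array)
    grad_fn -- callable returning the loss gradient w.r.t. the input
    eps     -- radius of the L-infinity ball around x
    """
    alpha = eps / steps                  # per-iteration step size (assumption)
    x_adv = x.astype(np.float64).copy()
    m = np.zeros_like(x_adv)             # first moment (gradient EMA)
    s = np.zeros_like(x_adv)             # AdaBelief second moment: EMA of (g - m)^2
    for t in range(1, steps + 1):
        g = grad_fn(x_adv)
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2   # "belief" in the gradient
        m_hat = m / (1 - beta1 ** t)                 # bias correction
        s_hat = s / (1 - beta2 ** t)
        update = m_hat / (np.sqrt(s_hat) + delta)    # AdaBelief direction
        x_adv = x_adv + alpha * np.sign(update)      # FGSM-style sign step
        x_adv = np.clip(x_adv, x - eps, x + eps)     # project into the eps ball
    return x_adv
```

With a real model, `grad_fn` would backpropagate the classification loss to the input; the sign step and the final clip keep the perturbation imperceptible in the L-infinity sense, while the AdaBelief moments smooth the update direction across iterations.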