TY - JOUR
T1 - Adversarially Robust Neural Architectures
AU - Dong, Minjing
AU - Li, Yanxi
AU - Wang, Yunhe
AU - Xu, Chang
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025/5
Y1 - 2025/5
N2 - Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. Existing methods are devoted to developing robust training strategies or regularizations to update the weights of the neural network. Beyond the weights, however, the overall structure and information flow in the network are explicitly determined by the neural architecture, which remains underexplored. This paper therefore aims to improve the adversarial robustness of the network from the architectural perspective. We explore the relationship among adversarial robustness, the Lipschitz constant, and architecture parameters, and show that an appropriate constraint on architecture parameters can reduce the Lipschitz constant and thereby further improve robustness. The importance of architecture parameters can vary from operation to operation or connection to connection. We approximate the Lipschitz constant of the entire network by a univariate log-normal distribution whose mean and variance are related to the architecture parameters. A confidence requirement can then be enforced by formulating a constraint on the distribution parameters via the cumulative distribution function. Compared with adversarially trained neural architectures searched by various NAS algorithms, as well as efficient human-designed models, our algorithm empirically achieves the best performance among all models under various attacks on different datasets.
KW - Adversarial Robustness
KW - Neural Architecture Search
UR - http://www.scopus.com/inward/record.url?scp=85218773705&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85218773705&origin=recordpage
U2 - 10.1109/TPAMI.2025.3542350
DO - 10.1109/TPAMI.2025.3542350
M3 - RGC 21 - Publication in refereed journal
C2 - 40036408
AN - SCOPUS:85218773705
SN - 0162-8828
VL - 47
SP - 4183
EP - 4197
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 5
ER -