Training Provably Robust Models by Polyhedral Envelope Regularization

Chen Liu*, Mathieu Salzmann, Sabine Süsstrunk

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

7 Citations (Scopus)

Abstract

Training certifiable neural networks enables us to obtain models with robustness guarantees against adversarial attacks. In this work, we introduce a framework to obtain a provable adversarial-free region in the neighborhood of the input data by a polyhedral envelope, which yields more fine-grained certified robustness than existing methods. We further introduce polyhedral envelope regularization (PER) to encourage larger adversarial-free regions and thus improve the provable robustness of the models. We demonstrate the flexibility and effectiveness of our framework on standard benchmarks; it applies to networks of different architectures and with general activation functions. Compared with the state of the art, PER has negligible computational overhead; it achieves better robustness guarantees and accuracy on clean data in various settings.
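As a rough illustration of the certification idea described in the abstract (this is our own toy sketch, not the paper's method or code): if a bound-propagation technique yields linear lower bounds on the logit margins, `f_y(x) - f_k(x) >= a_k·x + b_k`, then the half-spaces `a_k·x + b_k >= 0` form a polyhedron around the input inside which the predicted label cannot change, and the distance from the input to the nearest facet gives a certified l2 radius. The names and bound values below are hypothetical.

```python
import numpy as np

def certified_l2_radius(x0, A, b):
    """Distance from x0 to the nearest hyperplane a_k·x + b_k = 0.

    Each row of A and entry of b encodes a linear lower bound on the
    margin against one competing class; x0 must lie strictly inside the
    resulting polyhedron (all margins positive)."""
    margins = A @ x0 + b               # margin lower bounds at x0
    assert np.all(margins > 0), "x0 must be certifiably classified"
    norms = np.linalg.norm(A, axis=1)  # facet normal lengths
    # Point-to-hyperplane distance, minimized over all facets.
    return float(np.min(margins / norms))

# Hypothetical example: 2-D input, margins against two competing classes.
x0 = np.array([1.0, 1.0])
A = np.array([[1.0, 0.0],    # margin vs class 1: x1 >= 0
              [0.0, 2.0]])   # margin vs class 2: 2*x2 - 1 >= 0
b = np.array([0.0, -1.0])
r = certified_l2_radius(x0, A, b)  # min(1/1, 1/2) = 0.5
```

A regularizer in this spirit would penalize small radii during training to push the polyhedron's facets away from the data; the paper's actual envelope construction and loss differ in detail.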
Original language: English
Pages (from-to): 3146-3160
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 34
Issue number: 6
Online published: 26 Oct 2021
DOIs
Publication status: Published - Jun 2023
Externally published: Yes

Research Keywords

  • Adversarial training
  • Computational modeling
  • Predictive models
  • Provable robustness
  • Recurrent neural networks
  • Robustness
  • Smoothing methods
  • Standards
  • Training
