Noise-resistant sharpness-aware minimization in deep learning

Dan Su, Long Jin*, Jun Wang*

*Corresponding author for this work

Research output: Journal Publications and Reviews · RGC 21 - Publication in refereed journal · peer-review

1 Citation (Scopus)

Abstract

Sharpness-aware minimization (SAM) aims to enhance model generalization by minimizing the sharpness of the loss landscape, leading to more robust model performance. To protect sensitive information and enhance privacy, prevailing approaches add noise to models. However, additive noise inevitably degrades the generalization and robustness of the model. In this paper, we propose a noise-resistant SAM method based on a noise-resistant parameter update rule. We analyze the convergence and noise-resistance properties of the proposed method under noisy conditions. We report experimental results with several networks on various benchmark datasets to demonstrate the advantages of the proposed method with respect to model generalization and privacy protection. © 2024 Elsevier Ltd.
Original language: English
Article number: 106829
Journal: Neural Networks
Volume: 181
Online published: 24 Oct 2024
Publication status: Published - Jan 2025

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62176109 and the Hong Kong Research Grants Council under Grant 11203721.

Research Keywords

  • Deep neural networks
  • Noise resistance
  • Sharpness-aware minimization
