Context-Aware Fuzzing for Robustness Enhancement of Deep Learning Models

Haipeng Wang, Zhengyuan Wei, Qilin Zhou, Wing-Kwong Chan*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

In the testing-retraining pipeline for enhancing the robustness of deep learning (DL) models, many state-of-the-art robustness-oriented fuzzing techniques are metric-oriented. The pipeline generates adversarial examples as test cases via such a DL testing technique and retrains the DL model under test with test suites containing these test cases. On the one hand, the strategies of these fuzzing techniques tightly integrate the key characteristics of their testing metrics. On the other hand, they are often unaware of whether their generated test cases differ from the samples surrounding them, and of whether relevant test cases of other seeds exist when generating the current one. We propose a novel testing metric called Contextual Confidence (CC). CC measures a test case through its surrounding samples, in terms of their mean probability predicted to the prediction label of the test case. Based on this metric, we further propose a novel fuzzing technique, Clover, as a DL testing technique for the pipeline. In each fuzzing round, Clover first finds a set of seeds whose labels are the same as the label of the seed under fuzzing. It then locates, for each seed in this set, the test case that achieves the highest CC value among that seed's existing test cases and shares the same prediction label as the highest-CC test case of the seed under fuzzing. Clover computes the difference between each such pair of seed and test case. It incrementally applies these differences to perturb the highest-CC test case of the seed under fuzzing, and perturbs the resulting samples along the gradient to generate new test cases for the seed under fuzzing.
Clover finally selects test cases among the generated test cases of all seeds as evenly as possible, preferring test cases with higher CC values, to improve model robustness. The experiments show that Clover outperforms the state-of-the-art coverage-based technique Adapt and the loss-based fuzzing technique RobOT by 67%–129% and 48%–100%, respectively, in terms of robustness improvement ratio, delivered through the same testing-retraining pipeline. For test case generation, in terms of the numbers of unique adversarial labels and unique categories in the constructed test suites, Clover outperforms Adapt by 2.0× and 3.5× and RobOT by 1.6× and 1.7× when fuzzing clean models, and outperforms Adapt by 3.4× and 4.5× and RobOT by 9.8× and 11.0× when fuzzing adversarially trained models, respectively.
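The abstract describes CC only informally, as the mean probability that samples surrounding a test case are predicted to the test case's own prediction label. A minimal sketch of that idea, assuming the surrounding samples are drawn uniformly within a small L∞ ball around the test case (the sampling scheme, radius, and neighbor count are illustrative assumptions, not details from the paper):

```python
import numpy as np

def contextual_confidence(predict_proba, x, label, n_neighbors=50, radius=0.05, rng=None):
    """Sketch of the Contextual Confidence (CC) metric: the mean probability
    that samples surrounding test case `x` assign to `label`, the prediction
    label of `x`. Hyper-parameters here are illustrative, not from the paper."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Draw neighbors uniformly within an L_inf ball of radius `radius`,
    # clipped to the valid input range [0, 1].
    noise = rng.uniform(-radius, radius, size=(n_neighbors,) + x.shape)
    neighbors = np.clip(x + noise, 0.0, 1.0)
    probs = predict_proba(neighbors)  # shape: (n_neighbors, n_classes)
    return float(probs[:, label].mean())
```

Under this reading, a test case whose neighborhood is confidently predicted to the same label scores a high CC, which is the property Clover's selection step prefers.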

© 2024 Copyright held by the owner/author(s).
Original language: English
Article number: 8
Journal: ACM Transactions on Software Engineering and Methodology
Volume: 34
Issue number: 1
Online published: 24 Jul 2024
DOIs
Publication status: Published - 31 Dec 2024

Bibliographical note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Funding

CityU MF_EXT Grant

Research Keywords

  • context-awareness
  • fuzzing algorithm
  • robustness
  • assessment
  • metric
