ShadowBug: Enhanced Synthetic Fuzzing Benchmark Generation

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Detail(s)

Original language: English
Pages (from-to): 95-106
Journal / Publication: IEEE Open Journal of the Computer Society
Volume: 5
Online published: 19 Mar 2024
Publication status: Published - 2024

Abstract

Fuzzers have proven to be a vital tool in identifying vulnerabilities. Fuzzing is an area of active research with a constant drive to improve fuzzers, and it is equally important that the benchmarks used to evaluate their performance keep pace with evolving heuristics. Current research has primarily focused on using CVE bugs as benchmarks, with synthetic benchmarks receiving less attention due to concerns about overfitting to specific fuzzing heuristics. In this paper, we introduce ShadowBug, a new methodology that generates enhanced synthetic bugs. In contrast to existing synthetic benchmarks, our approach arranges bugs to fit specific distributions by quantifying the constraint-solving difficulty of each block. We also uncover implicit constraints of real-world bugs that prior research has overlooked and develop an integer-overflow-based transformation from normal constraints to their implicit forms. We construct a synthetic benchmark and evaluate it against five prominent fuzzers. The experiments reveal that 391 out of 466 bugs were detected, which confirms the practicality and effectiveness of our methodology. Additionally, we introduce a finer-grained evaluation metric called 'bug difficulty,' which sheds more light on the fuzzers' heuristic strengths with regard to constraint-solving and bug exploitation. The results of our study have practical implications for future fuzzer evaluation methods.

© 2024 The Authors.
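The abstract mentions an integer-overflow-based transformation that turns a normal (explicit) constraint into an implicit form. The C sketch below is only a hypothetical illustration of that general idea, not code from the paper; the function names, the magic constant, and the crash mechanism are invented here for demonstration.

```c
/*
 * Hypothetical sketch: an explicit "magic value" guard versus an implicit
 * guard that is only satisfied through unsigned integer wrap-around.
 * All names and constants are illustrative assumptions, not from ShadowBug.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Explicit constraint: the guard compares the input to the magic value directly. */
static void explicit_bug(const uint8_t *data, size_t len) {
    uint32_t x;
    if (len < sizeof(x)) return;
    memcpy(&x, data, sizeof(x));
    if (x == 0xDEADBEEFu) {
        abort();                      /* injected crash site */
    }
}

/* Implicit constraint: the same reachability condition is hidden behind an
 * unsigned overflow, so the guard never mentions the magic value itself. */
static void implicit_bug(const uint8_t *data, size_t len) {
    uint32_t x;
    if (len < sizeof(x)) return;
    memcpy(&x, data, sizeof(x));
    /* x + (2^32 - 0xDEADBEEF) wraps to 0 exactly when x == 0xDEADBEEF. */
    uint32_t y = x + (UINT32_MAX - 0xDEADBEEFu + 1u);
    if (y == 0u) {
        abort();                      /* same injected crash site */
    }
}
```

Both guards are satisfied by the same input, but a fuzzer whose heuristics key on direct comparisons against constants (e.g., comparison operand extraction) may find the explicit form much easier than the overflow-based one, which is the kind of behavioral difference a benchmark of implicit constraints is meant to expose.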

Research Area(s)

  • Bug injection, fuzzing benchmark, symbolic execution, synthetic bug
