ShadowBug: Enhanced Synthetic Fuzzing Benchmark Generation
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Zhou, Zhengxiang; Wang, Cong
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Pages (from-to) | 95-106 |
Journal / Publication | IEEE Open Journal of the Computer Society |
Volume | 5 |
Online published | 19 Mar 2024 |
Publication status | Published - 2024 |
Link(s)
DOI | DOI |
---|---|
Attachment(s) | Documents
Publisher's Copyright Statement
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85188430280&origin=recordpage |
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(021c5ea6-519c-4649-9242-2cf9dedae351).html |
Abstract
Fuzzers have proven to be a vital tool in identifying vulnerabilities. Fuzzing is an area of active research with a constant drive to improve fuzzers, and it is equally important that the benchmarks used to evaluate them keep pace with evolving heuristics. Current research has primarily focused on using CVE bugs as benchmarks, with synthetic benchmarks receiving less attention due to concerns about overfitting specific fuzzing heuristics. In this paper, we introduce ShadowBug, a new methodology that generates enhanced synthetic bugs. In contrast to existing synthetic benchmarks, our approach arranges bugs to fit specified distributions by quantifying the constraint-solving difficulty of each block. We also uncover implicit constraints of real-world bugs that prior research has overlooked and develop an integer-overflow-based transformation from normal constraints to their implicit forms. We construct a synthetic benchmark and evaluate it against five prominent fuzzers. The experiments reveal that 391 out of 466 bugs were detected, which confirms the practicality and effectiveness of our methodology. Additionally, we introduce a finer-grained evaluation metric called 'bug difficulty,' which sheds more light on each fuzzer's heuristic strengths in constraint solving and bug exploitation. The results of our study have practical implications for future fuzzer evaluation methods.
© 2024 The Authors.
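The integer-overflow-based transformation from explicit constraints to implicit forms mentioned in the abstract can be pictured with a small sketch. The snippet below is illustrative only and is not the paper's actual generator: `trigger_bug`, the magic value `0xDEADBEEF`, and the wrap-around constant `0x21524111` are all hypothetical choices. The idea is that unsigned 32-bit addition wraps modulo 2^32, so `x + 0x21524111` equals zero exactly when `x == 0xDEADBEEF`, encoding the same equality constraint without ever emitting a direct comparison against the magic value.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical injected bug payload: crash so a fuzzer registers detection. */
static void trigger_bug(void) {
    fprintf(stderr, "synthetic bug reached\n");
    abort();
}

int main(int argc, char **argv) {
    if (argc < 2)
        return 0;
    uint32_t x = (uint32_t)strtoul(argv[1], NULL, 0);

    /* Explicit form: the magic value appears in a direct comparison, which
     * comparison-tracking fuzzer heuristics can observe and solve. */
    if (x == 0xDEADBEEFu)
        trigger_bug();

    /* Implicit form: unsigned addition wraps modulo 2^32, so the sum is zero
     * exactly when x == 0x100000000 - 0x21524111 == 0xDEADBEEF. The same
     * constraint is enforced, but no comparison against the magic value is
     * ever visible to the fuzzer's instrumentation. */
    if (x + 0x21524111u == 0u)
        trigger_bug();

    return 0;
}
```

Both guards are satisfied by the same input; the point of the transformation is that a fuzzer relying on comparison instrumentation (e.g., value profiling of operands) solves the explicit form far more easily than the implicit one, which is what makes such implicit constraints useful for calibrating bug difficulty.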
Research Area(s)
- Bug injection, fuzzing benchmark, symbolic execution, synthetic bug
Citation Format(s)
ShadowBug: Enhanced Synthetic Fuzzing Benchmark Generation. / Zhou, Zhengxiang; Wang, Cong.
In: IEEE Open Journal of the Computer Society, Vol. 5, 2024, p. 95-106.
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review