ATTSUM: A Deep Attention-Based Summarization Model for Bug Report Title Generation

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

10 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 1663-1677
Journal / Publication: IEEE Transactions on Reliability
Volume: 72
Issue number: 4
Online published: 24 Jan 2023
Publication status: Published - Dec 2023

Abstract

Concise and precise bug report titles help software developers capture the highlights of a bug report quickly. Unfortunately, bug reporters often fail to create high-quality titles. Recent long short-term memory (LSTM)-based sequence-to-sequence models such as iTAPE were proposed to generate bug report titles automatically, but the text representation method and the LSTM employed in such models struggle to capture accurate semantic information and to draw global dependencies among tokens effectively. This article proposes a deep attention-based summarization model (i.e., ATTSUM) to generate high-quality bug report titles. Specifically, ATTSUM adopts the encoder-decoder framework: it utilizes the robustly optimized bidirectional encoder representations from transformers (RoBERTa) approach to encode bug report bodies and better capture contextual semantic information, a stacked Transformer decoder to generate titles automatically, and a copy mechanism to handle the rare-token problem. To validate the effectiveness of ATTSUM, we conduct automatic and manual evaluations on 333,563 <body, title> pairs of bug reports and perform a practical analysis of its ability to improve low-quality titles. The results show that ATTSUM is superior to the state-of-the-art baselines by a substantial margin, both on automatic evaluation metrics (e.g., by 3.4%–58.8% and 7.7%–42.3% in terms of recall-oriented understudy for gisting evaluation (ROUGE) F1 and bilingual evaluation understudy (BLEU), respectively) and on three human evaluation modalities (e.g., by 1.9%–57.5%). Moreover, we analyze the impact of the training data size on ATTSUM, and the results imply that our approach is robust enough to generate much better titles. © 2023 IEEE.
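The abstract's architecture (RoBERTa encoder, stacked Transformer decoder, copy mechanism) can be sketched compactly. Below is a minimal, illustrative PyTorch sketch of an ATTSUM-style model, assuming the HuggingFace transformers roberta-base checkpoint as the encoder and a pointer-generator-style gate for copying; the class name AttSumSketch, the hyperparameters, and the exact copy formulation are assumptions for illustration, not the authors' released implementation.

```python
# A sketch of an ATTSUM-style title generator, NOT the paper's exact code:
# RoBERTa encodes the bug report body, a stacked Transformer decoder
# generates the title, and a learned gate mixes vocabulary generation
# with copying tokens from the source (to handle rare tokens).
import torch
import torch.nn as nn
from transformers import RobertaModel

class AttSumSketch(nn.Module):
    def __init__(self, vocab_size, d_model=768, n_layers=6, n_heads=8):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.embed = nn.Embedding(vocab_size, d_model)
        self.generator = nn.Linear(d_model, vocab_size)
        self.p_gen = nn.Linear(d_model, 1)  # gate: generate vs. copy

    def forward(self, src_ids, src_mask, tgt_ids):
        # Encode the bug report body with RoBERTa.
        memory = self.encoder(input_ids=src_ids,
                              attention_mask=src_mask).last_hidden_state
        # Causal mask so each title position only attends to its prefix.
        causal = nn.Transformer.generate_square_subsequent_mask(
            tgt_ids.size(1)).to(tgt_ids.device)
        dec = self.decoder(self.embed(tgt_ids), memory, tgt_mask=causal,
                           memory_key_padding_mask=~src_mask.bool())
        # Distribution over the vocabulary from decoder states.
        vocab_dist = torch.softmax(self.generator(dec), dim=-1)
        # Attention over source tokens doubles as the copy distribution.
        attn = torch.softmax(dec @ memory.transpose(1, 2), dim=-1)
        copy_dist = torch.zeros_like(vocab_dist).scatter_add(
            -1, src_ids.unsqueeze(1).expand(-1, dec.size(1), -1), attn)
        # Mix generation and copying with the learned gate.
        gate = torch.sigmoid(self.p_gen(dec))
        return gate * vocab_dist + (1 - gate) * copy_dist

# Usage (shapes only): given tokenized body ids/mask and shifted title ids,
# probs = model(src_ids, src_mask, tgt_ids)  # (batch, title_len, vocab)
```

The gate lets the model fall back on copying identifiers, stack-trace symbols, and version strings verbatim from the bug report body, which is the rare-token problem the abstract's copy mechanism addresses.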

Research Area(s)

  • Bug reports, deep learning, text summarization, title generation, transformers