A study on the impact of pre-trained model on Just-In-Time defect prediction

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

Author(s)

  • Yuxiang Guo
  • Xiaopeng Gao
  • Zhenyu Zhang
  • W.K. Chan
  • Bo Jiang

Detail(s)

Original language: English
Title of host publication: Proceedings - 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security, QRS 2023
Place of publication: Los Alamitos, Calif.
Publisher: Institute of Electrical and Electronics Engineers, Inc.
Pages: 105-116
ISBN (electronic): 9798350319583
ISBN (print): 9798350319590
Publication status: Published - 2023

Publication series

Name: IEEE International Conference on Software Quality, Reliability and Security, QRS
ISSN (print): 2693-9185
ISSN (electronic): 2693-9177

Conference

Title: 23rd IEEE International Conference on Software Quality, Reliability, and Security (QRS 2023)
Location: Chiang Mai Marriott Hotel
Place: Thailand
City: Chiang Mai
Period: 22 - 26 October 2023

Abstract

Previous research on Just-In-Time (JIT) defect prediction has primarily focused on the performance of individual pre-trained models, without exploring the relationships between different pre-trained models used as backbones. In this study, we build six models, RoBERTaJIT, CodeBERTJIT, BARTJIT, PLBARTJIT, GPT2JIT, and CodeGPTJIT, each with a distinct pre-trained model as its backbone, and systematically explore the differences and connections between them. Specifically, we investigate how the models perform when given the commit code and the commit message as inputs, as well as the relationship between training efficiency and model distribution across the six models. We also conduct an ablation experiment to explore each model's sensitivity to its inputs, and we examine how the models perform in zero-shot and few-shot scenarios. Our findings indicate that each of the models built on different backbones shows improvements, and that models whose backbones were pre-trained in similar ways consume similar amounts of training resources. We also observe that the commit code plays a significant role in defect detection, and that the different pre-trained models detect defects better on a balanced dataset under few-shot scenarios. These results provide new insights for optimizing JIT defect prediction with pre-trained models and highlight the factors that require more attention when constructing such models. In addition, with 2,000 training samples, CodeGPTJIT and GPT2JIT achieved better performance than DeepJIT and CC2Vec on the two datasets, respectively. These findings underscore the effectiveness of transformer-based pre-trained models for JIT defect prediction, especially in scenarios with limited training data. © 2023 IEEE.
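To make the setup in the abstract concrete, the sketch below shows one way a pre-trained transformer backbone can be fine-tuned for binary JIT defect prediction on commit message plus commit code, in the style of the CodeBERTJIT/GPT2JIT variants described above. It is a minimal illustration, not the paper's implementation: the checkpoint name, the input-concatenation scheme, the hyperparameters, and the toy `commits` data are all assumptions introduced here for clarity.

```python
# Minimal sketch: fine-tune a pre-trained backbone (here CodeBERT) as a
# binary JIT defect predictor over (commit message, commit code) pairs.
# All specifics below are illustrative assumptions, not the paper's exact setup.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "microsoft/codebert-base"  # swap for roberta-base, facebook/bart-base, gpt2, ...

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)


class CommitDataset(Dataset):
    """Pairs each commit (message + code diff) with a defect label (0 = clean, 1 = defective)."""

    def __init__(self, commits):
        self.examples = []
        for msg, code, label in commits:
            # One simple input scheme: encode the commit message and the code diff
            # as a sentence pair, truncated/padded to a fixed length.
            enc = tokenizer(msg, code, truncation=True, max_length=512,
                            padding="max_length", return_tensors="pt")
            self.examples.append((enc, label))

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        enc, label = self.examples[idx]
        item = {k: v.squeeze(0) for k, v in enc.items()}
        item["labels"] = torch.tensor(label)
        return item


# Toy data standing in for a JIT dataset (e.g. commit-level defect labels).
commits = [
    ("fix null check in parser", "- if (p) {\n+ if (p != NULL) {", 1),
    ("update README wording", "- old text\n+ new text", 0),
]
loader = DataLoader(CommitDataset(commits), batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for batch in loader:
    optimizer.zero_grad()
    loss = model(**batch).loss  # cross-entropy over the clean/defective classes
    loss.backward()
    optimizer.step()
```

Swapping `MODEL_NAME` for a different checkpoint is what distinguishes the six backbone variants; the few-shot experiments in the paper correspond to restricting the number of training commits (for example, to 2,000 samples) before fine-tuning.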

Research Area(s)

  • few-shot scenario, Just-In-Time defect prediction, model sensitivity, pre-trained model

Citation Format(s)

A study on the impact of pre-trained model on Just-In-Time defect prediction. / Guo, Yuxiang; Gao, Xiaopeng; Zhang, Zhenyu et al.
Proceedings - 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security, QRS 2023. Los Alamitos, Calif.: Institute of Electrical and Electronics Engineers, Inc., 2023. p. 105-116 (IEEE International Conference on Software Quality, Reliability and Security, QRS).
