Improving Ranking-Oriented Defect Prediction Using a Cost-Sensitive Ranking SVM

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

50 Scopus Citations

Author(s)

  • Jin Liu
  • Qing Li
  • Zhou Xu
  • Junping Wang
  • Xiaohui Cui

Detail(s)

Original language: English
Pages (from-to): 139-153
Journal / Publication: IEEE Transactions on Reliability
Volume: 69
Issue number: 1
Online published: 22 Aug 2019
Publication status: Published - Mar 2020

Abstract

Context: Ranking-oriented defect prediction (RODP) ranks software modules so that limited testing resources can be allocated to each module according to its predicted number of defects. Most RODP methods overlook two issues. First, incorrectly ranking a module with many defects is far more costly than incorrectly ranking a module with few defects, because the mis-ranked module receives too few testing resources to find all of its defects. Second, the numbers of defects across software modules are highly imbalanced in defective software datasets. Cost-sensitive learning is an effective technique for handling both the cost issue and the data imbalance problem in software defect prediction. However, its effectiveness has not been investigated in RODP models.

Aims: In this article, we propose a cost-sensitive ranking support vector machine (CSRankSVM) algorithm to improve the performance of RODP models.

Method: CSRankSVM modifies the loss function of the ranking SVM algorithm by adding two penalty parameters to address both the cost issue and the data imbalance problem. Additionally, the loss function of CSRankSVM is optimized using a genetic algorithm.

Results: The experimental results on 11 project datasets with 41 releases show that CSRankSVM achieves 1.12%-15.68% higher average fault percentile average (FPA) values than five existing RODP methods (decision tree regression, linear regression, Bayesian ridge regression, ranking SVM, and learning-to-rank (LTR)) and 1.08%-15.74% higher average FPA values than four data imbalance learning methods (random undersampling and the synthetic minority oversampling technique, two data resampling methods; RankBoost, an ensemble learning method; and IRSVM, a cost-sensitive ranking SVM method for information retrieval).

Conclusion: CSRankSVM is capable of handling the cost issue and the data imbalance problem in RODP and achieves better performance. Therefore, CSRankSVM is recommended as an effective method for RODP.
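The evaluation metric above, fault percentile average (FPA), rewards rankings that place high-defect modules first. As a minimal illustrative sketch (not the paper's implementation), assuming the standard definition of FPA — the average, over every top-m cut of the ranking, of the fraction of all defects contained in the top m modules — it can be computed as follows:

```python
def fpa(actual_defects, predicted_scores):
    """Fault percentile average for a predicted ranking.

    actual_defects: true defect count per module.
    predicted_scores: predicted defect count (or ranking score) per module.
    Returns a value in (0, 1]; higher means a better ranking.
    """
    # Order modules by predicted score, most defective first.
    order = sorted(range(len(actual_defects)),
                   key=lambda i: predicted_scores[i], reverse=True)
    ranked = [actual_defects[i] for i in order]

    n_modules = len(ranked)
    n_defects = sum(ranked)
    if n_defects == 0:
        return 0.0

    # Average over m = 1..K of the fraction of all defects
    # found in the top-m modules of the ranking.
    total, cumulative = 0.0, 0
    for defects in ranked:
        cumulative += defects
        total += cumulative / n_defects
    return total / n_modules
```

For example, with true defect counts [3, 2, 1], a perfect ranking yields FPA = 7/9 while the reversed ranking yields 5/9, so mis-ranking the most defective module is penalized most, which is the cost asymmetry CSRankSVM's penalty parameters target.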

Research Area(s)

  • Cost-sensitive learning, data imbalance, ranking-oriented defect prediction (RODP)