Delving into Parameter-Efficient Fine-Tuning in Code Change Learning: An Empirical Study
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Liu, Shuo; Keung, Jacky; Yang, Zhen et al.
Detail(s)
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings - 2024 IEEE International Conference on Software Analysis, Evolution and Reengineering |
| Subtitle of host publication | SANER 2024 |
| Publisher | Institute of Electrical and Electronics Engineers, Inc. |
| Pages | 465-476 |
| ISBN (electronic) | 9798350330663 |
| ISBN (print) | 979-8-3503-3067-0 |
| Publication status | Published - 2024 |
Publication series
| Name | Proceedings - IEEE International Conference on Software Analysis, Evolution and Reengineering, SANER |
| --- | --- |
| ISSN (print) | 1534-5351 |
| ISSN (electronic) | 2640-7574 |
Conference
| Title | IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2024) |
| --- | --- |
| Country | Finland |
| City | Rovaniemi |
| Period | 12-15 March 2024 |
Abstract
Compared to Full-Model Fine-Tuning (FMFT), Parameter-Efficient Fine-Tuning (PEFT) has demonstrated superior performance and lower computational overhead in several code understanding tasks, such as code summarization and code search. This advantage can be attributed to PEFT's ability to alleviate the catastrophic forgetting issue of Pre-trained Language Models (PLMs) by updating only a small number of parameters. As a result, PEFT effectively harnesses pre-trained general-purpose knowledge for downstream tasks. However, existing studies primarily involve static code comprehension, which aligns with the pre-training paradigm of recent PLMs and facilitates knowledge transfer, and they do not account for dynamic code changes. Thus, it remains unclear whether PEFT outperforms FMFT in task-specific adaptation for code-change-related tasks. To address this question, we examine two prevalent PEFT methods, namely Adapter Tuning (AT) and Low-Rank Adaptation (LoRA), and compare their performance with FMFT on five popular PLMs. Specifically, we evaluate their performance on two widely studied code-change-related tasks: Just-In-Time Defect Prediction (JIT-DP) and Commit Message Generation (CMG). The results demonstrate that both AT and LoRA achieve state-of-the-art (SOTA) results in JIT-DP and exhibit performance comparable to FMFT and other SOTA approaches in CMG. Furthermore, AT and LoRA are superior in cross-lingual and low-resource scenarios. We also conduct three probing tasks to explain the efficacy of PEFT techniques on the JIT-DP and CMG tasks from both static and dynamic perspectives. The study indicates that PEFT, particularly through AT and LoRA, offers promising advantages in code-change-related tasks, surpassing FMFT in certain aspects. This research contributes to a deeper understanding of PEFT's capability to leverage PLMs for dynamic code changes. The replication package is available at https://github.com/ishuoliu/PEFT4CC. © 2024 IEEE.
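For readers unfamiliar with the two PEFT methods named in the abstract, the sketch below illustrates their core mechanics. This is a minimal, self-contained illustration, not the authors' implementation (which is in the replication package linked above); the PyTorch setting and the sizes (hidden size 768, rank r=8, scaling alpha=16, bottleneck 64) are illustrative assumptions.

```python
# Minimal sketch of the two PEFT methods studied (LoRA and Adapter Tuning).
# Illustrative only; see https://github.com/ishuoliu/PEFT4CC for the paper's code.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W + (alpha/r) * B @ A."""

    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Stands in for a pre-trained PLM projection; frozen during fine-tuning.
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad = False
        # Trainable factors: only r * (d_in + d_out) parameters are updated.
        self.lora_a = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(d_out, r))  # B = 0 at init,
        self.scaling = alpha / r                           # so training starts at W.

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)


class BottleneckAdapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen PLM's representation intact.
        return h + self.up(self.act(self.down(h)))


if __name__ == "__main__":
    layer = LoRALinear(d_in=768, d_out=768, r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"LoRA trainable params: {trainable} / {total}")  # 12,288 / 602,112
```

Running the sketch reports roughly 12k trainable parameters out of ~602k for a single 768×768 projection, i.e. about a 2% update budget per layer; this small-update property is what the abstract credits with alleviating catastrophic forgetting.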
Research Area(s)
- Adapter Tuning, Code Change, Low-Rank Adaptation, Pre-trained Language Models
Citation Format(s)
Delving into Parameter-Efficient Fine-Tuning in Code Change Learning: An Empirical Study. / Liu, Shuo; Keung, Jacky; Yang, Zhen et al.
Proceedings - 2024 IEEE International Conference on Software Analysis, Evolution and Reengineering: SANER 2024. Institute of Electrical and Electronics Engineers, Inc., 2024. p. 465-476 (Proceedings - IEEE International Conference on Software Analysis, Evolution and Reengineering, SANER).
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review