Exploring and Unleashing the Power of Large Language Models in Automated Code Translation

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review


Author(s)

  • Fang LIU
  • Zhongxing YU
  • Jia LI
  • Yifan HONG
  • Zhi JIN
  • Ge LI

Detail(s)

Original language: English
Article number: 71
Journal / Publication: Proceedings of the ACM on Software Engineering
Volume: 1
Issue number: FSE
Online published: 12 Jul 2024
Publication status: Published - Jul 2024

Conference

Title: 32nd ACM International Conference on the Foundations of Software Engineering (FSE 2024)
Place: Brazil
City: Porto de Galinhas
Period: 15 - 19 July 2024

Abstract

Code translation tools, namely transpilers, are developed for automatic source-to-source translation. The latest learning-based transpilers have shown impressive improvements over rule-based counterparts in both translation accuracy and readability, owing to their task-specific pre-training on extensive monolingual corpora. Nevertheless, their performance remains unsatisfactory for practical deployment, and the associated training resources are prohibitively expensive. Large Language Models (LLMs), pre-trained on huge amounts of human-written code and text, have shown remarkable performance in many code intelligence tasks due to their powerful generality, even without task-specific re-training or fine-tuning. LLMs can thus potentially circumvent the above limitations, but they have not yet been thoroughly explored for this task. This paper investigates diverse LLMs and learning-based transpilers for automated code translation and finds that, although certain LLMs outperform current transpilers, they still suffer from accuracy issues: most failures are induced by a lack of comprehension of source programs (38.51%), missing clear instructions on I/O types in translation (14.94%), and ignoring discrepancies between source and target programs (41.38%).
     Guided by these findings, we further propose UniTrans, a Unified code Translation framework applicable to various LLMs, to unleash their power in this field. Specifically, UniTrans first crafts a series of test cases for target programs with the assistance of source programs. Next, it harnesses these auto-generated test cases to augment code translation and then evaluates the correctness of the translated programs via execution. Afterward, UniTrans iteratively repairs incorrectly translated programs, prompted by the test-case execution results. Extensive experiments are conducted on six translation settings among Python, Java, and C++. Three recent LLMs of diverse sizes, including GPT-3.5 and LLaMA-13B/7B, are tested with UniTrans, and all achieve substantial improvements in computational accuracy and exact-match accuracy across almost all translation settings, showing the universal effectiveness of UniTrans in practice.
© 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
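
Read procedurally, the abstract describes a three-stage translate/test/repair loop. The Python sketch below is a minimal reconstruction from the abstract alone, not the authors' released implementation; every callable (gen_tests, translate, run_tests, repair) is a hypothetical placeholder standing in for LLM prompting and a test-execution harness.

from typing import Callable, List

def unitrans_loop(
    source: str,
    gen_tests: Callable[[str], List[str]],             # LLM: craft tests from the source program
    translate: Callable[[str, List[str]], str],        # LLM: translate, with tests in the prompt
    run_tests: Callable[[str, List[str]], List[str]],  # harness: execute tests, return failures
    repair: Callable[[str, str, List[str]], str],      # LLM: fix the program given failing tests
    max_rounds: int = 3,
) -> str:
    """Translate `source`, then iteratively repair it from test feedback."""
    tests = gen_tests(source)              # 1. craft test cases with the source's assistance
    candidate = translate(source, tests)   # 2. test-augmented translation
    for _ in range(max_rounds):            # 3. evaluate via execution; repair on failure
        failures = run_tests(candidate, tests)
        if not failures:
            break                          # all tests pass: accept the translation
        candidate = repair(source, candidate, failures)
    return candidate

Keeping the LLM calls behind callables mirrors the abstract's claim that the framework is applicable to various LLMs: the same loop can wrap GPT-3.5 or LLaMA-13B/7B without change.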

Research Area(s)

  • Automated Code Translation, Large Language Models, Transformer

Bibliographic Note

Research Unit(s) information for this publication is provided by the author(s) concerned.