An Inception Architecture-Based Model for Improving Code Readability Classification

Research output: Conference Papers (RGC: 31A, 31B, 32, 33); 32_Refereed conference paper (no ISBN/ISSN)


Detail(s)

Original language: English
State: Published - Jun 2018

Conference

Title: 22nd Evaluation and Assessment in Software Engineering Conference (EASE 2018)
Location: University of Canterbury
Place: New Zealand
City: Christchurch
Period: 28 - 29 June 2018

Abstract

Code readability classification is the task of labeling a piece of source code as Readable or Unreadable. To build accurate classification models, existing studies handcraft features from aspects that intuitively correlate with code readability and then explore various machine learning algorithms over the proposed features. In contrast, our work opens a new way to tackle the problem using deep learning. Specifically, we propose IncepCRM, a novel model based on the Inception architecture that automatically learns multi-scale features from source code with little manual intervention. We use human-annotator information as an auxiliary input when training IncepCRM, and we empirically evaluate IncepCRM on three publicly available datasets. The results show that: 1) annotator information benefits model performance, as confirmed by robust statistical tests (i.e., the Brunner-Munzel test and Cliff's delta); and 2) IncepCRM achieves improved accuracy over previously reported models across all datasets. These findings confirm the feasibility and effectiveness of deep learning for code readability classification.
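The abstract reports effect sizes using Cliff's delta, a non-parametric measure of how often values in one sample exceed values in another. As an illustration only (this is not the authors' code, and the function name `cliffs_delta` is our own), a minimal implementation looks like this:

```python
def cliffs_delta(xs, ys):
    """Cliff's delta effect size between two samples.

    Counts, over all pairs (x, y), how often x > y versus x < y,
    and normalizes by the number of pairs. The result ranges from
    -1 (all ys exceed all xs) to +1 (all xs exceed all ys); 0 means
    the two samples overlap completely.
    """
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))


# Example: identical samples give delta = 0 (no effect).
print(cliffs_delta([1, 2, 3], [1, 2, 3]))  # 0.0

# One sample entirely above the other gives delta = 1.
print(cliffs_delta([2, 3, 4], [1, 1, 1]))  # 1.0
```

In practice, a study like this would compute the delta between, say, the accuracy scores of two models across repeated runs, and interpret its magnitude against conventional thresholds (negligible, small, medium, large).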

Bibliographic Note

Research Unit(s) information for this record is provided by the author(s) concerned.

Citation Format(s)

An Inception Architecture-Based Model for Improving Code Readability Classification. / Mi, Qing; Keung, Jacky; Xiao, Yan; Mensah, Solomon; Mei, Xiupei.

2018. Paper presented at 22nd Evaluation and Assessment in Software Engineering Conference (EASE 2018), Christchurch, New Zealand.
