MELA: Multilingual Evaluation of Linguistic Acceptability

Ziyin Zhang (Co-first Author), Yikang Liu (Co-first Author), Weifang Huang, Junyu Mao, Rui Wang*, Hai Hu*

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-reviewed


Abstract

In this work, we present the largest benchmark to date on linguistic acceptability: Multilingual Evaluation of Linguistic Acceptability—MELA, with 46K samples covering 10 languages from a diverse set of language families. We establish LLM baselines on this benchmark, and investigate cross-lingual transfer in acceptability judgments with XLM-R. In pursuit of multilingual interpretability, we conduct probing experiments with fine-tuned XLM-R to explore the process of syntax capability acquisition. Our results show that GPT-4o exhibits strong multilingual ability, outperforming fine-tuned XLM-R, while open-source multilingual models lag behind by a noticeable gap. Cross-lingual transfer experiments show that transfer in acceptability judgment is non-trivial: 500 Icelandic fine-tuning examples yield an MCC of 23 in a completely unrelated language—Chinese. Results of our probing experiments indicate that training on MELA improves the performance of XLM-R on syntax-related tasks. https://github.com/sjtu-compling/MELA. © 2024 Association for Computational Linguistics.
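The MCC scores mentioned in the abstract refer to the Matthews correlation coefficient, conventionally scaled by 100 when reported for acceptability benchmarks such as CoLA and MELA. A minimal sketch of computing it for binary acceptability labels (the function name and toy data below are illustrative, not from the paper):

```python
from math import sqrt

def mcc(golds, preds):
    """Matthews correlation coefficient for binary labels (1 = acceptable, 0 = unacceptable)."""
    tp = sum(1 for g, p in zip(golds, preds) if g == 1 and p == 1)
    tn = sum(1 for g, p in zip(golds, preds) if g == 0 and p == 0)
    fp = sum(1 for g, p in zip(golds, preds) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(golds, preds) if g == 1 and p == 0)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: MCC is 0 when any marginal is empty (denominator is zero).
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Toy example: a perfect classifier scores 1.0; chance-level predictions score ~0.
golds = [1, 1, 0, 0, 1, 0]
preds = [1, 0, 0, 0, 1, 1]
score = mcc(golds, preds)  # in [-1, 1]; multiply by 100 for the reported scale
```

Unlike accuracy, MCC is robust to the class imbalance common in acceptability datasets, which is why it is the standard metric for this task.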

Original language: English
Title of host publication: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Publisher: Association for Computational Linguistics
Pages: 2658-2674
Volume: 1: Long Papers
ISBN (Electronic): 9798891760943
DOIs
Publication status: Published - Aug 2024
Externally published: Yes
Event: 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 - Bangkok, Thailand
Duration: 11 Aug 2024 - 16 Aug 2024
https://2024.aclweb.org/

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (Print): 0736-587X

Conference

Conference: 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
Place: Thailand
City: Bangkok
Period: 11/08/24 - 16/08/24

Funding

We thank Bingjie Mao, Jinchi Jiang, Wen Shi, Jialin Guo, Chenhui Liu and Licen Liu for their help in data collection. We also appreciate the suggestions and comments from the anonymous reviewers. This study is supported by the Shanghai Pujiang Program awarded to Hai Hu (22PJC063). Ziyin Zhang and Rui Wang are partially supported by the National Natural Science Foundation of China (62176153) and the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102, as the MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University).

Publisher's Copyright Statement

  • This full text is made available under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/

