Do Large Language Models Understand Conversational Implicature – A Case Study with a Chinese Sitcom

Shisen Yue, Siyuan Song, Xinyuan Cheng, Hai Hu*

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

2 Citations (Scopus)

Abstract

Understanding the non-literal meaning of an utterance is critical for large language models (LLMs) to become human-like social communicators. In this work, we introduce SwordsmanImp, the first Chinese multi-turn-dialogue-based dataset aimed at conversational implicature, sourced from dialogues in the Chinese sitcom My Own Swordsman. It includes 200 carefully handcrafted questions, all annotated with the Gricean maxims they violate. We test eight closed-source and open-source LLMs on two tasks: a multiple-choice question task and an implicature explanation task. Our results show that GPT-4 attains human-level accuracy (94%) on multiple-choice questions. CausalLM follows with 78.5% accuracy. Other models, including GPT-3.5 and several open-source models, show lower accuracy, ranging from 20% to 60%, on multiple-choice questions. Human raters were asked to rate the LLM-generated explanations of the implicatures for reasonability, logic, and fluency. While all models generate largely fluent and self-consistent text, their explanations score low on reasonability, except for GPT-4, suggesting that most LLMs cannot produce satisfactory explanations of the implicatures in the conversation. Moreover, we find that LLMs' performance does not vary significantly across Gricean maxims, suggesting that LLMs do not process implicatures derived from different maxims differently. Our data and code are available at https://github.com/sjtu-compling/llm-pragmatics. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025

Original language: English
Title of host publication: Chinese Computational Linguistics
Subtitle of host publication: 23rd China National Conference, CCL 2024, Taiyuan, China, July 25–28, 2024, Proceedings
Editors: Maosong Sun, Jiye Liang, Xianpei Han, Zhiyuan Liu, Yulan He, Gaoqi Rao, Yubo Chen, Zhiliang Tian
Publisher: Springer Singapore
Pages: 402-418
Number of pages: 17
ISBN (Print): 9789819783663
DOIs
Publication status: Published - 2025
Externally published: Yes
Event: 23rd China National Conference on Computational Linguistics, CCL 2024 - Taiyuan, China
Duration: 25 Jul 2024 – 28 Jul 2024

Publication series

Name: Lecture Notes in Computer Science
Volume: 14761
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 23rd China National Conference on Computational Linguistics, CCL 2024
Place: China
City: Taiyuan
Period: 25/07/24 – 28/07/24

Funding

We thank Xinjia Qi, Qiyu Sun and Yaqian Zhang for verifying the implicatures and improving the dataset. We thank all participants for their support in this study. We also thank the anonymous reviewers for their valuable comments. This project is funded by Shanghai Pujiang Program (22PJC063) awarded to Hai Hu.

Research Keywords

  • large language models
  • pragmatic reasoning
  • the cooperative principle
