Few-shot Question Generation for Reading Comprehension

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review


Author(s)

  • Yin Poon
  • John S. Y. Lee
  • Yu Yan Lam
  • Wing Lam Suen
  • Elsie Li Chen Ong
  • Samuel Kai Wah Chu

Detail(s)

Original language: English
Title of host publication: Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)
Publisher: Association for Computational Linguistics (ACL)
Pages: 21-27
ISBN (print): 9798891761551
Publication status: Published - Aug 2024

Conference

Title: 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)
Location: Hybrid
Place: Thailand
City: Bangkok
Period: 16 August 2024

Abstract

According to the internationally recognized PIRLS (Progress in International Reading Literacy Study) assessment standards, reading comprehension questions should require not only information retrieval, but also higher-order processes such as inferencing, interpreting, and evaluating. However, these kinds of questions are often not available in large quantities for training question generation models. This paper investigates whether pre-trained Large Language Models (LLMs) can produce higher-order questions. Human assessment on a Chinese dataset shows that few-shot LLM prompting generates more usable and higher-order questions than two competitive neural baselines. © 2024 Association for Computational Linguistics.

Citation Format(s)

Few-shot Question Generation for Reading Comprehension. / Poon, Yin; Lee, John S. Y.; Lam, Yu Yan et al.
Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10). Association for Computational Linguistics (ACL), 2024. p. 21-27.
