Few-shot Question Generation for Reading Comprehension
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s): Poon, Yin; Lee, John S. Y.; Lam, Yu Yan et al.
Detail(s)
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10) |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 21-27 |
| ISBN (print) | 9798891761551 |
| Publication status | Published - Aug 2024 |
Conference
| Title | 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10) |
| --- | --- |
| Location | Hybrid |
| Place | Thailand |
| City | Bangkok |
| Period | 16 August 2024 |
Abstract
According to the internationally recognized PIRLS (Progress in International Reading Literacy Study) assessment standards, reading comprehension questions should require not only information retrieval, but also higher-order processes such as inferencing, interpreting, and evaluating. However, such questions are often not available in large enough quantities to train question generation models. This paper investigates whether pre-trained Large Language Models (LLMs) can produce higher-order questions. Human assessment on a Chinese dataset shows that few-shot LLM prompting generates more usable and higher-order questions than two competitive neural baselines. © 2024 Association for Computational Linguistics.
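The few-shot prompting setup described in the abstract can be sketched as follows. This is a minimal illustration of prompt construction only: the instruction wording, exemplar passage, and exemplar question are invented placeholders, not the prompts or data used in the paper.

```python
# Minimal sketch of few-shot prompt construction for question generation.
# Exemplar (passage, question) pairs are concatenated before the target
# passage, so an LLM can complete the final "Question:" line.

def build_fewshot_prompt(exemplars, target_passage):
    """Build a few-shot prompt from (passage, question) pairs plus a
    target passage whose question the model should generate."""
    parts = [
        "Generate a higher-order reading comprehension question "
        "(one requiring inference, interpretation, or evaluation)."
    ]
    for passage, question in exemplars:
        parts.append(f"Passage: {passage}\nQuestion: {question}")
    # Leave the final question blank for the model to complete.
    parts.append(f"Passage: {target_passage}\nQuestion:")
    return "\n\n".join(parts)

# Hypothetical exemplar, for illustration only.
exemplars = [
    ("The farmer watered the seedlings every dawn, even in the rain.",
     "Why might the farmer's routine be considered excessive?"),
]
prompt = build_fewshot_prompt(
    exemplars,
    "The old bridge creaked under the weight of the morning crowd.",
)
print(prompt)
```

The resulting string would be sent to a pre-trained LLM, which completes the trailing "Question:" line with a new question for the target passage.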
Citation Format(s)
Few-shot Question Generation for Reading Comprehension. / Poon, Yin; Lee, John S. Y.; Lam, Yu Yan et al.
Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10). Association for Computational Linguistics (ACL), 2024. p. 21-27.