Abstract
Knowledge-enhanced text generation aims to improve the quality of generated text by utilizing internal or external knowledge sources. While language models have demonstrated impressive capabilities in generating coherent and fluent text, their lack of interpretability presents a substantial obstacle. The limited interpretability of generated text significantly impacts its practical usability, particularly in knowledge-enhanced text generation tasks that demand reliability and explainability. Existing methods often employ domain-specific knowledge retrievers tailored to particular data characteristics, limiting their generalizability to diverse data types and tasks. To overcome this limitation, we directly leverage the two-tier architecture of structured knowledge, consisting of high-level entities and low-level knowledge triples, to design a task-agnostic structured knowledge hunter. Specifically, we employ a local-global interaction scheme for structured knowledge representation learning and a hierarchical transformer-based pointer network as the backbone for selecting relevant knowledge triples and entities. By combining the strong generative ability of language models with the high faithfulness of the knowledge hunter, our model achieves high interpretability, enabling users to comprehend the model's output generation process. Furthermore, we empirically demonstrate the effectiveness of our model in both internal knowledge-enhanced table-to-text generation on the RotoWire-FG dataset and external knowledge-enhanced dialogue response generation on the KdConv dataset. Our task-agnostic model outperforms state-of-the-art methods and the corresponding language models, setting new standards on both benchmarks. © 2024 IEEE.
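The two-tier selection described in the abstract (point first to a high-level entity, then to one of its low-level knowledge triples) can be sketched as follows. This is a minimal, hypothetical illustration only: plain dot-product scoring stands in for the paper's hierarchical transformer-based pointer network, and all names, shapes, and values are assumptions, not the authors' code.

```python
def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def argmax(scores):
    """Index of the highest score."""
    return max(range(len(scores)), key=scores.__getitem__)

def select_knowledge(query, entity_embs, triples_per_entity):
    """Two-tier pointer-style selection: entity first, then a triple within it."""
    # Tier 1 (global): score every high-level entity against the query.
    e_idx = argmax([dot(e, query) for e in entity_embs])
    # Tier 2 (local): score only the chosen entity's low-level triples.
    t_idx = argmax([dot(t, query) for t in triples_per_entity[e_idx]])
    return e_idx, t_idx

# Toy 2-d embeddings: two entities, each with two candidate triples.
query = [1.0, 0.0]
entity_embs = [[0.9, 0.1], [0.1, 0.9]]
triples = [
    [[0.2, 0.8], [0.8, 0.2]],  # triples of entity 0
    [[0.5, 0.5], [0.3, 0.7]],  # triples of entity 1
]

print(select_knowledge(query, entity_embs, triples))  # → (0, 1)
```

Because the returned indices point at concrete entities and triples, a reader can trace which piece of structured knowledge supported each generated span, which is the interpretability benefit the abstract highlights.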
| Original language | English |
|---|---|
| Pages (from-to) | 32-44 |
| Journal | IEEE Journal of Selected Topics in Signal Processing |
| Volume | 19 |
| Issue number | 1 |
| Online published | 13 Jun 2024 |
| DOIs | |
| Publication status | Published - Jan 2025 |
Bibliographical note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Funding
This work was supported in part by the National Natural Science Foundation of China under Grant 62371411, by the Research Grants Council of the Hong Kong SAR under Grant GRF 11217823, by the InnoHK initiative of the Government of the HKSAR, and by the Laboratory for AI-Powered Financial Technologies.
Research Keywords
- structured knowledge
- knowledge retrieval
- language models
- generation interpretability
RGC Funding Information
- RGC-funded
Fingerprint
Dive into the research topics of 'Towards Improving Interpretability of Language Model Generation through a Structured Knowledge Discovery Approach'. Together they form a unique fingerprint.
Projects
GRF: Towards Building An Adaptive Distributed Computation Framework for Massive Context Interplay
SONG, L. (Principal Investigator / Project Coordinator) & LAN, T. (Co-Investigator)
1/01/24 → …
Project: Research