SH2: Self-Highlighted Hesitation Helps You Decode More Truthfully

Jushi Kai, Tianhang Zhang, Hai Hu, Zhouhan Lin*

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

2 Citations (Scopus)
1 Download (CityUHK Scholars)

Abstract

Large language models (LLMs) demonstrate strong performance in text generation. However, LLMs still suffer from hallucination. In this work, we propose an inference-time method, Self-Highlighted Hesitation (SH2), to help LLMs decode more truthfully. SH2 builds on a simple observation rooted in information theory: for an LLM, tokens predicted with lower probabilities tend to be more informative than others. Our analysis shows that these low-confidence tokens are more likely to be closely related to factual information, such as nouns, proper nouns, and adjectives. Therefore, we propose to “highlight” the factual information by selecting key tokens with the lowest probabilities and concatenating them to the original context, thus forcing the model to repeatedly read and hesitate on these tokens before generation. During decoding, we also adopt contrastive decoding to emphasize the difference in output probabilities brought by the hesitation. Experimental results demonstrate that SH2, requiring no additional data or models, effectively helps LLMs elicit factual knowledge and distinguish hallucinated contexts by themselves. SH2 achieves significant and consistent improvements for LLaMA-7b, LLaMA2-7b and Mistral-7b on various hallucination tasks.
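The abstract describes two ingredients: selecting the lowest-probability ("hesitation") tokens to prepend to the context, and contrastive decoding between the highlighted and original runs. The toy sketch below illustrates both steps in plain Python with hypothetical helper names and made-up probabilities; it is not the authors' implementation, which operates on real model logits.

```python
import math

def select_hesitation_tokens(token_probs, k=2):
    # Pick the k tokens the model assigned the lowest probability to;
    # per the paper's intuition, these tend to carry factual content.
    ranked = sorted(token_probs, key=lambda tp: tp[1])
    return [tok for tok, _ in ranked[:k]]

def contrastive_scores(p_with, p_without, alpha=1.0):
    # Score each candidate next token by how much the highlighted
    # (hesitation-prefixed) context raises its log-probability relative
    # to the original context: log p_with - alpha * log p_without.
    return {
        tok: math.log(p_with[tok]) - alpha * math.log(p_without[tok])
        for tok in p_with
    }

# Made-up per-token probabilities from a forward pass over the context.
probs = [("The", 0.9), ("Eiffel", 0.1), ("Tower", 0.2),
         ("is", 0.85), ("tall", 0.3)]
keys = select_hesitation_tokens(probs, k=2)
# The highlighted prompt would be: " ".join(keys) + original context.

# Made-up next-token distributions with and without the hesitation prefix.
p_with = {"Paris": 0.6, "London": 0.2}
p_without = {"Paris": 0.4, "London": 0.35}
scores = contrastive_scores(p_with, p_without)
best = max(scores, key=scores.get)  # token favored by the contrast
```

Here the two least-confident tokens ("Eiffel", "Tower") are selected, and the contrastive score favors the candidate whose probability rose most after the model re-read the highlighted tokens.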

©2024 Association for Computational Linguistics

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics
Subtitle of host publication: EMNLP 2024
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Publisher: Association for Computational Linguistics
Pages: 4514-4530
Number of pages: 17
ISBN (Print): 9798891761681
DOIs
Publication status: Published - Nov 2024
Externally published: Yes
Event: 29th Conference on Empirical Methods in Natural Language Processing (EMNLP 2024) - Hybrid, Miami, United States
Duration: 12 Nov 2024 - 16 Nov 2024
https://2024.emnlp.org/

Conference

Conference: 29th Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)
Abbreviated title: EMNLP 2024
Place: United States
City: Miami
Period: 12/11/24 - 16/11/24
Internet address

Funding

This work was sponsored by the National Key Research and Development Program of China (No. 2023ZD0121402) and a National Natural Science Foundation of China (NSFC) grant (No. 62106143).

Publisher's Copyright Statement

  • This full text is made available under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/
