CHESS: Optimizing LLM Inference via Channel-Wise Thresholding and Selective Sparsification

Junhui He, Shangyu Wu, Weidong Wen, Chun Jason Xue, Qingan Li*

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works; RGC 32 - Refereed conference paper (with host publication); peer-review


Abstract

Deploying large language models (LLMs) on edge devices presents significant challenges due to the substantial computational overhead and memory requirements. Activation sparsification can mitigate these resource challenges by reducing the number of activated neurons during inference. Existing methods typically employ thresholding-based sparsification based on the statistics of activation tensors. However, they do not model the impact of activation sparsification on performance, leading to avoidable performance degradation. To address these limitations, this paper reformulates the activation sparsification problem to explicitly capture the relationship between activation sparsity and model performance. Then, this paper proposes CHESS, a general activation sparsification approach via CHannel-wise thrEsholding and Selective Sparsification. First, channel-wise thresholding assigns a unique threshold to each activation channel in the feed-forward network (FFN) layers. Then, selective sparsification involves applying thresholding-based activation sparsification to specific layers within the attention modules. Finally, we detail the implementation of sparse kernels to accelerate LLM inference. Experimental results demonstrate that the proposed CHESS achieves lower performance degradation over eight downstream tasks while activating fewer parameters than existing methods, thus speeding up the LLM inference by up to 1.27x. © 2024 Association for Computational Linguistics.
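The core idea of channel-wise thresholding described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the use of NumPy, and the specific threshold values are all assumptions for demonstration purposes; only the idea of a per-channel (rather than tensor-wide) threshold comes from the abstract.

```python
import numpy as np

def channel_wise_sparsify(x, thresholds):
    """Illustrative sketch: zero out activations whose magnitude falls
    below a per-channel threshold.

    x          : activation tensor of shape (tokens, channels)
    thresholds : per-channel thresholds of shape (channels,)
    """
    # Comparison broadcasts thresholds across the token axis,
    # so each channel is pruned against its own threshold.
    mask = np.abs(x) >= thresholds
    return x * mask

# Example: channel 0 has a loose threshold, channel 1 a tight one.
x = np.array([[0.05, 0.9],
              [0.40, 0.2]])
t = np.array([0.1, 0.5])
out = channel_wise_sparsify(x, t)
# out is [[0.0, 0.9], [0.4, 0.0]]: small entries are zeroed per channel.
```

A single tensor-wide threshold would prune the same magnitude in every channel; the per-channel variant lets channels with different activation statistics be sparsified at different rates, which is the distinction the abstract draws against prior thresholding methods.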
Original language: English
Title of host publication: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics
Pages: 18658-18668
ISBN (Print): 9798891761643
DOIs
Publication status: Published - Nov 2024
Event: 29th Conference on Empirical Methods in Natural Language Processing (EMNLP 2024) - Hybrid, Miami, United States
Duration: 12 Nov 2024 - 16 Nov 2024
https://2024.emnlp.org/

Publication series

Name: EMNLP - Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference

Conference

Conference: 29th Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)
Abbreviated title: EMNLP 2024
Place: United States
City: Miami
Period: 12/11/24 - 16/11/24

Bibliographical note

Full text of this publication does not contain sufficient affiliation information. With consent from the author(s) concerned, the Research Unit(s) information for this record is based on the existing academic department affiliation of the author(s).

Publisher's Copyright Statement

  • This full text is made available under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/

