
Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

Junyuan Hong (Co-first Author), Jinhao Duan (Co-first Author), Chenhui Zhang (Co-first Author), Zhangheng Li (Co-first Author), Chulin Xie, Kelsey Lieberman, James Diffenderfer, Brian Bartoldson, Ajay Jaiswal, Kaidi Xu, Bhavya Kailkhura, Dan Hendrycks, Dawn Song, Zhangyang Wang, Bo Li*

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works - RGC 32 Refereed conference paper (with host publication), peer-reviewed

Abstract

Compressing high-capability Large Language Models (LLMs) has emerged as a favored strategy for resource-efficient inference. While state-of-the-art (SoTA) compression methods boast impressive advancements in preserving benign task performance, the potential risks of compression in terms of safety and trustworthiness have been largely neglected. This study conducts the first thorough evaluation of three leading LLMs using five SoTA compression techniques across eight trustworthiness dimensions. Our experiments highlight the intricate interplay between compression and trustworthiness, revealing several consistent patterns. We find that quantization is currently a more effective approach than pruning for achieving efficiency and trustworthiness simultaneously. For instance, a 4-bit quantized model retains the trustworthiness of its original counterpart, whereas model pruning significantly degrades trustworthiness, even at 50% sparsity. Moreover, employing quantization within a moderate bit range can unexpectedly improve certain trustworthiness dimensions such as ethics and fairness. Conversely, extreme quantization to very low bit levels (3 bits) tends to reduce trustworthiness significantly. This increased risk cannot be uncovered by looking at benign performance alone, which in turn mandates comprehensive trustworthiness evaluation in practice. These findings culminate in practical recommendations for simultaneously achieving high utility, efficiency, and trustworthiness in LLMs. © 2024 by the author(s).
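The two compression families the abstract contrasts can be illustrated with a minimal NumPy sketch. This is a toy round-to-nearest quantizer and an unstructured magnitude pruner, not the paper's actual SoTA methods (which use calibrated techniques such as GPTQ or AWQ); it only shows what "4-bit quantization" and "50% sparsity" mean operationally for a weight matrix.

```python
import numpy as np

def quantize_rtn(w, bits=4):
    """Toy symmetric round-to-nearest quantization of a weight matrix.

    Uses a single per-tensor scale; SoTA methods use calibrated,
    per-group scales and error compensation instead.
    """
    levels = 2 ** (bits - 1) - 1             # e.g. 7 levels each side for 4-bit
    scale = np.abs(w).max() / levels
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale                          # de-quantized weights

def prune_magnitude(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w), axis=None)[k]  # k-th smallest magnitude
    return np.where(np.abs(w) < thresh, 0.0, w)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

w_q = quantize_rtn(w, bits=4)                 # 4-bit quantized reconstruction
w_p = prune_magnitude(w, sparsity=0.5)        # 50% of weights zeroed

print("mean quantization error:", float(np.abs(w - w_q).mean()))
print("achieved sparsity:", float((w_p == 0).mean()))
```

Both transforms keep benign reconstruction error small at these settings, which is exactly why the paper argues benign metrics alone cannot reveal the trustworthiness gap between them.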
Original language: English
Title of host publication: Proceedings of the 41st International Conference on Machine Learning
Editors: Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, Felix Berkenkamp
Publisher: ML Research Press
Pages: 18611-18633
Number of pages: 23
Publication status: Published - Jul 2024
Externally published: Yes
Event: 41st International Conference on Machine Learning (ICML 2024) - Messe Wien Exhibition Congress Center, Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024
https://proceedings.mlr.press/v235/
https://icml.cc/

Publication series

Name: Proceedings of Machine Learning Research
Volume: 235
ISSN (Print): 2640-3498

Conference

Conference: 41st International Conference on Machine Learning (ICML 2024)
Place: Austria
City: Vienna
Period: 21/07/24 - 27/07/24

Funding

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and LLNL LDRD Program Project No. 23-ER-030 (LLNL-CONF-860188). This work is partially supported by the National Science Foundation under grant No. 1910100, No. 2046726, No. 2229876, and No. 2319242, DARPA GARD, the National Aeronautics and Space Administration (NASA) under grant No. 80NSSC20M0229, an Alfred P. Sloan Fellowship, and an eBay research grant. The work of Z. Wang is also supported by the National Science Foundation under Grant IIS-2212176.
