TY - JOUR
T1 - SpikingMamba: Towards Energy-Efficient Large Language Models via Knowledge Distillation from Mamba
T2 - Transactions on Machine Learning Research
AU - Huang, Yulong
AU - Tang, Jianxiong
AU - Wang, Chao
AU - Wang, Ziyi
AU - Zhang, Jianguo
AU - Lu, Zhichao
AU - Cheng, Bojun
AU - Leng, Luziwei
PY - 2026/1
Y1 - 2026/1
AB - Large Language Models (LLMs) have achieved remarkable performance across tasks but remain energy-intensive due to dense matrix operations. Spiking neural networks (SNNs) improve energy efficiency by replacing dense matrix multiplications with sparse accumulations, and their sparse spike activity enables efficient LLM deployment on edge devices. However, prior SNN-based LLMs often sacrifice performance for efficiency, and recovering accuracy typically requires full pretraining, which is costly and impractical. To address this, we propose SpikingMamba, an energy-efficient SNN-based LLM distilled from Mamba that improves energy efficiency with minimal accuracy sacrifice. SpikingMamba integrates two key components: (a) SI-LIF, a signed-integer spiking neuron that preserves semantic polarity through signed multi-level spike representations; and (b) a training-exclusive Smoothed Gradient Compensation (SGC) path that mitigates quantization loss while preserving spike-driven efficiency. We employ a single-stage distillation strategy to transfer the zero-shot ability of pretrained Mamba and further enhance it via reinforcement learning (RL). Experiments show that SpikingMamba-1.3B achieves a 4.76× energy benefit with only a 4.78% zero-shot accuracy gap relative to the original Mamba. After RL, the model gains a further 2.55% in accuracy, narrowing the gap from 4.78% to 2.23%. © 2026, Transactions on Machine Learning Research. All rights reserved.
UR - http://www.scopus.com/inward/record.url?scp=105030244968&partnerID=8YFLogxK
M3 - RGC 21 - Publication in refereed journal
SN - 2835-8856
VL - 2026-January
JO - Transactions on Machine Learning Research
JF - Transactions on Machine Learning Research
ER -