
Federated reasoning LLMs: a survey

Shuyue WEI, Yongxin TONG*, Zimu ZHOU*, Yi XU*, Jingkai GAO, Tongyu WEI, Tianran HE, Weifeng LV*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

248 Downloads (CityUHK Scholars)

Abstract

Reasoning has long been regarded as a distinctive hallmark of human cognition, and recent advances in the artificial intelligence community have increasingly focused on reasoning large language models (rLLMs). However, due to strict privacy regulations, domain-specific reasoning knowledge is often distributed across multiple data owners, limiting an rLLM's ability to fully leverage such valuable resources. In this context, federated learning (FL) has gained increasing attention in both academia and industry as a promising privacy-preserving paradigm for addressing the challenges of data-efficient rLLM training.
In this paper, we conduct a comprehensive survey on federated rLLMs and propose a novel taxonomy based on training signals, covering signals derived from raw data, learned representations, and preference feedback. For each category, we highlight emerging trends in how FL is used to enhance the reasoning capabilities of rLLMs, considering model effectiveness, communication cost, and privacy preservation. Finally, we envision future research directions and challenges based on insights from existing studies.
© The Author(s) 2025.
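The privacy-preserving paradigm the abstract describes is typically built on federated averaging, where data owners train locally and share only model updates, never raw data. The following is a minimal sketch of that idea; the function names (`local_update`, `fed_avg`) and the gradient-step setup are illustrative assumptions, not taken from the surveyed work.

```python
import numpy as np


def local_update(weights, grads, lr=0.1):
    # Hypothetical single local training step on a client's private data:
    # only the resulting weights leave the client, not the data itself.
    return {k: w - lr * grads[k] for k, w in weights.items()}


def fed_avg(client_weights, client_sizes):
    # Server-side FedAvg: weight each client's parameters by its
    # local dataset size, then sum to form the new global model.
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: sum((n / total) * cw[k] for cw, n in zip(client_weights, client_sizes))
        for k in keys
    }


# Two simulated clients with different data volumes.
avg = fed_avg(
    [{"w": np.array([2.0, 2.0])}, {"w": np.array([4.0, 4.0])}],
    [1, 3],
)
```

The weighted average (1/4 · 2 + 3/4 · 4 = 3.5 per coordinate) illustrates why larger data owners pull the global model toward their local optimum, one of the effectiveness/communication trade-offs the survey's taxonomy examines.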
Original language: English
Article number: 1912613
Number of pages: 23
Journal: Frontiers of Computer Science
Volume: 19
Issue number: 12
Online published: 26 Jun 2025
Publication status: Published - Dec 2025

Funding

This work was partially supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. 62425202, U21A20516, 62336003), the Beijing Natural Science Foundation (Z230001), the Fundamental Research Funds for the Central Universities (No. JK2024-03), the Didi Collaborative Research Program and the State Key Laboratory of Complex & Critical Software Environment (SKLCCSE). Zimu Zhou’s research is supported by Chow Sang Sang Group Research Fund (No. 9229139).

Research Keywords

  • federated learning
  • reasoning LLMs
  • fine-tuning
  • retrieval-augmented generation

Publisher's Copyright Statement

  • This full text is made available under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/

RGC Funding Information

  • RGC-funded

