FlexInfer: Breaking Memory Constraint via Flexible and Efficient Offloading for On-Device LLM Inference

Hongchao Du (Co-first Author), Shangyu Wu (Co-first Author), Arina Kharlamova, Nan Guan, Chun Jason Xue

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review


Abstract

Large Language Models (LLMs) are difficult to deploy for on-device inference because of their high memory demands. Traditional methods for reducing memory usage often compromise performance and lack adaptability. We propose FlexInfer, an optimized offloading framework for on-device inference that addresses these issues with techniques such as asynchronous prefetching, balanced memory locking, and flexible tensor preservation. These strategies improve memory efficiency and mitigate I/O bottlenecks, ensuring high performance within user-specified resource constraints. Experiments demonstrate that FlexInfer significantly improves throughput under limited resources, achieving up to 12.5 times better performance than existing methods and facilitating the deployment of large models on resource-constrained devices. © 2025 Copyright held by the owner/author(s).
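As a rough illustration of the asynchronous-prefetching idea named in the abstract, the Python sketch below overlaps loading of the next layer's weights from storage with computation on the current layer, so that I/O latency is hidden behind compute. This is a minimal sketch, not FlexInfer's actual implementation: the names load_layer_weights, run_layer, NUM_LAYERS, and HIDDEN_DIM are hypothetical placeholders, and storage I/O is simulated with a sleep.

```python
# Hedged sketch of asynchronous weight prefetching for layer-by-layer
# offloaded inference. All names are illustrative placeholders, not
# FlexInfer's API; I/O and compute are simulated with sleeps and toy math.
import time
from concurrent.futures import ThreadPoolExecutor

import numpy as np

NUM_LAYERS = 8    # assumed model depth
HIDDEN_DIM = 64   # assumed hidden size

def load_layer_weights(layer_idx: int) -> np.ndarray:
    """Stand-in for reading one layer's weights from disk or flash."""
    time.sleep(0.01)  # simulated storage latency
    rng = np.random.default_rng(layer_idx)
    return rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)).astype(np.float32)

def run_layer(weights: np.ndarray, hidden: np.ndarray) -> np.ndarray:
    """Stand-in for one transformer layer's forward pass."""
    return np.tanh(hidden @ weights)

def offloaded_forward(hidden: np.ndarray) -> np.ndarray:
    # One background I/O worker prefetches weights while compute proceeds.
    with ThreadPoolExecutor(max_workers=1) as io_pool:
        pending = io_pool.submit(load_layer_weights, 0)
        for layer_idx in range(NUM_LAYERS):
            weights = pending.result()  # blocks only if I/O lags compute
            if layer_idx + 1 < NUM_LAYERS:
                # Prefetch the next layer's weights during this layer's compute.
                pending = io_pool.submit(load_layer_weights, layer_idx + 1)
            hidden = run_layer(weights, hidden)
    return hidden

if __name__ == "__main__":
    out = offloaded_forward(np.ones(HIDDEN_DIM, dtype=np.float32))
    print(out.shape)  # (64,)
```

In this pattern the single I/O worker is always one layer ahead of compute; the paper's balanced memory locking and flexible tensor preservation would further decide which tensors stay resident in memory rather than being reloaded each pass.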
Original language: English
Title of host publication: EuroMLSys '25
Subtitle of host publication: Proceedings of the 5th Workshop on Machine Learning and Systems
Publisher: Association for Computing Machinery
Pages: 56-65
ISBN (Electronic): 979-8-4007-1538-9
DOIs
Publication status: Published - 1 Apr 2025
Event: EuroMLSys '25: 5th Workshop on Machine Learning and Systems - Rotterdam, Netherlands
Duration: 30 Mar 2025 – 3 Apr 2025

Conference

Conference: EuroMLSys '25: 5th Workshop on Machine Learning and Systems
Country/Territory: Netherlands
City: Rotterdam
Period: 30/03/25 – 03/04/25

Bibliographical note

The full text of this publication does not contain sufficient affiliation information. With the consent of the author(s) concerned, the Research Unit(s) information for this record is based on the existing academic department affiliation of the author(s).

Research Keywords

  • LLM
  • On-Device Inference
  • Offloading
  • Resource-Constrained Devices

Publisher's Copyright Statement

  • This full text is made available under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/
