Patching in Order: Efficient On-Device Model Fine-Tuning for Multi-DNN Vision Applications

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Author(s)

  • Zhiqiang Cao
  • Yun Cheng
  • Anqi Lu
  • Youbing Hu
  • Jie Liu
  • Min Zhang
  • Zhijun Li

Detail(s)

Original language: English
Journal / Publication: IEEE Transactions on Mobile Computing
Publication status: Online published - 14 Aug 2024

Abstract

The increasing deployment of multiple deep neural networks (DNNs) on edge devices is revolutionizing mobile vision applications, spanning autonomous vehicles, augmented reality, and video surveillance. These applications must adapt to contextual and environmental drifts, typically through fine-tuning on edge devices without cloud access, owing to growing data privacy concerns and the need for timely responses. However, fine-tuning multiple DNNs on edge devices is challenging due to the substantial computational workload. In this paper, we present PatchLine, a novel framework for efficient on-device fine-tuning of multi-DNN vision applications. At the core of PatchLine is a lightweight adapter design called patches, coupled with a strategic patch-updating approach across models. Specifically, PatchLine adopts drift-adaptive incremental patching, correlation-aware warm patching, and entropy-based sample selection to holistically reduce the number of trainable parameters, training epochs, and training samples. Experiments on four datasets, three vision tasks, four backbones, and two platforms demonstrate that PatchLine reduces the total computational cost by an average of 55% without sacrificing accuracy compared to the state of the art. © 2024 IEEE.
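The abstract does not spell out how entropy-based sample selection is implemented. As a minimal illustrative sketch (not the authors' code), one plausible reading is to rank incoming samples by the Shannon entropy of the model's predicted class distribution and fine-tune only on the most uncertain ones; the `select_samples` helper and its budget parameter below are assumptions for illustration:

```python
import math

def entropy(probs):
    # Shannon entropy (in nats) of a predicted class distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_samples(pred_probs, budget):
    # Keep the `budget` samples whose predictions are most uncertain
    # (highest entropy); confident samples contribute little training
    # signal during fine-tuning and can be skipped to save compute.
    ranked = sorted(range(len(pred_probs)),
                    key=lambda i: entropy(pred_probs[i]),
                    reverse=True)
    return sorted(ranked[:budget])

# Hypothetical softmax outputs for four samples over three classes.
preds = [
    [0.98, 0.01, 0.01],  # confident -> likely skipped
    [0.40, 0.35, 0.25],  # uncertain -> likely selected
    [0.90, 0.05, 0.05],  # confident -> likely skipped
    [0.34, 0.33, 0.33],  # near-uniform -> highest entropy
]
print(select_samples(preds, 2))  # -> [1, 3]
```

Ranking by entropy rather than, say, max-probability margin is one common uncertainty heuristic; the paper's actual selection criterion may differ in detail.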

Research Area(s)

  • model adaptation, multi-DNN, on-device, Patch