Throughput Maximization of Delay-Aware DNN Inference in Edge Computing by Exploring DNN Model Partitioning and Inference Parallelism

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) · Publication in refereed journal · peer-reviewed

5 Scopus Citations

Author(s)

  • Jing Li
  • Weifa Liang
  • Yuchen Li
  • Zichuan Xu
  • Song Guo

Detail(s)

Original language: English
Pages (from-to): 3017-3030
Journal / Publication: IEEE Transactions on Mobile Computing
Volume: 22
Issue number: 5
Online published: 8 Nov 2021
Publication status: Published - May 2023

Abstract

Mobile Edge Computing (MEC) has emerged as a promising paradigm for handling the explosive growth of mobile applications by offloading compute-intensive tasks to an MEC network for processing. The surge of deep learning brings new vigor to the prospect of an intelligent Internet of Things (IoT), and edge intelligence has arisen to provision real-time deep neural network (DNN) inference services for users. In this paper, we study a novel delay-aware DNN inference throughput maximization problem, accelerating each DNN inference by jointly exploring DNN partitioning and multi-thread parallelism. Specifically, we consider the problem under both offline and online request arrival settings: in the offline setting, a set of DNN inference requests is given in advance; in the online setting, a sequence of DNN inference requests arrives one by one without knowledge of future arrivals. We first show that the defined problems are NP-hard. We then devise a novel constant-approximation algorithm for the problem under the offline setting, and we propose an online algorithm with a provable competitive ratio for the problem under the online setting. We finally evaluate the proposed algorithms through experimental simulations; the results demonstrate that the proposed algorithms are promising.
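To illustrate the DNN partitioning idea the abstract refers to, the following is a minimal sketch (not the paper's algorithm): given assumed per-layer execution times on the device and at the edge, and the sizes of intermediate layer outputs, it enumerates all cut points and picks the one minimizing end-to-end delay. All names and numbers here are hypothetical.

```python
def best_partition(device_ms, edge_ms, out_mb, bandwidth_mbps):
    """Return (cut, delay_ms): layers [0, cut) run on the device and
    layers [cut, n) run at the edge. cut == 0 offloads the whole model;
    cut == n keeps it entirely local. out_mb[i] is the size (in MB) of
    the tensor that must be sent if the cut is placed before layer i
    (out_mb[0] is the raw model input)."""
    n = len(device_ms)
    best_cut, best_delay = 0, float("inf")
    for cut in range(n + 1):
        local = sum(device_ms[:cut])          # device-side compute
        remote = sum(edge_ms[cut:])           # edge-side compute
        # Transfer time for the intermediate tensor at the cut
        # (nothing is sent when the whole model stays on the device).
        transfer = 0.0 if cut == n else out_mb[cut] * 8 / bandwidth_mbps * 1000
        delay = local + transfer + remote
        if delay < best_delay:
            best_cut, best_delay = cut, delay
    return best_cut, best_delay
```

For example, with a fast edge but a moderately sized input, the best cut tends to sit after the layers that shrink the intermediate tensor enough to make the upload cheap. The paper additionally parallelizes each resulting partition across threads and schedules many such requests, which this single-request sketch does not capture.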

Research Area(s)

  • algorithm design and analysis, Approximation algorithms, approximation and online algorithms, Computational modeling, computing and bandwidth resource allocation and optimization, delay-aware DNN inference, Delays, DNN model inference provisioning, DNN partitioning, Inference algorithms, inference parallelism, Intelligent IoT devices, Mobile Edge Computing (MEC), Parallel processing, Partitioning algorithms, Task analysis, throughput maximization

Citation Format(s)

Throughput Maximization of Delay-Aware DNN Inference in Edge Computing by Exploring DNN Model Partitioning and Inference Parallelism. / Li, Jing; Liang, Weifa; Li, Yuchen et al.
In: IEEE Transactions on Mobile Computing, Vol. 22, No. 5, 05.2023, p. 3017-3030.
