Towards Efficient Front-End Visual Sensing for Digital Retina: A Model-Centric Paradigm

Yihang Lou, Ling-Yu Duan*, Yong Luo, Ziqian Chen, Tongliang Liu, Shiqi Wang, Wen Gao

*Corresponding author for this work

Research output: Journal Publications and Reviews (RGC 21 - Publication in refereed journal, peer-reviewed)

10 Citations (Scopus)

Abstract

The digital retina provides enhanced visual sensing and analysis capabilities for the city brain in smart cities by converting the visual data from visual sensors into semantic features. With deep learning or handcrafted models deployed on front-end devices, these features are extracted locally and then delivered to back-end servers for advanced analysis. In this scenario, we propose a model generation, utilization, and communication paradigm aimed at strong front-end sensing capabilities for establishing better artificial visual systems in smart cities. In particular, we propose an integrated multi-model reuse and prediction strategy for deep learning models, which greatly increases the feasibility of the digital retina for large-scale visual data analysis in smart cities. The proposed multi-model reuse scheme reuses the knowledge from models cached and transmitted in the digital retina to obtain more discriminative capability. To deliver these newly generated models efficiently, a model prediction scheme is further proposed that encodes and reconstructs model differences. Extensive experiments demonstrate the effectiveness of the proposed model-centric paradigm.
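For intuition, the model prediction idea can be pictured as transmitting only the difference between a newly generated model and a reference model already available at the receiver, rather than the full set of weights. The snippet below is a minimal sketch of such delta encoding, not the paper's implementation: it assumes models represented as NumPy weight dictionaries and a simple uniform quantizer, and the names encode_model_delta, decode_model_delta, and step are hypothetical choices for illustration.

```python
import numpy as np

def encode_model_delta(new_weights, ref_weights, step=1e-3):
    """Encode a new model as quantized differences from a cached reference model.

    new_weights / ref_weights: dicts mapping layer names to NumPy arrays.
    step: quantization step size (an assumed value, not from the paper).
    Returns integer codes that are cheaper to transmit than raw weights.
    """
    codes = {}
    for name, w_new in new_weights.items():
        delta = w_new - ref_weights[name]                       # model difference (residual)
        codes[name] = np.round(delta / step).astype(np.int32)   # uniform quantization
    return codes

def decode_model_delta(codes, ref_weights, step=1e-3):
    """Reconstruct the new model at the receiver from the reference model plus codes."""
    return {name: ref_weights[name] + codes[name].astype(np.float32) * step
            for name in codes}

# Usage sketch: two toy "models" whose weights differ only slightly.
ref = {"fc": np.random.randn(4, 4).astype(np.float32)}
new = {"fc": ref["fc"] + 0.01 * np.random.randn(4, 4).astype(np.float32)}
codes = encode_model_delta(new, ref)
rec = decode_model_delta(codes, ref)
assert np.allclose(rec["fc"], new["fc"], atol=1e-3)
```

Only the integer codes (plus the identity of the shared reference model) would need to be communicated, which reflects the general motivation behind encoding model differences for efficient model delivery.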
Original language: English
Article number: 8960464
Pages (from-to): 3002-3013
Journal: IEEE Transactions on Multimedia
Volume: 22
Issue number: 11
Online published: 15 Jan 2020
DOIs
Publication status: Published - Nov 2020

Research Keywords

  • Digital retina
  • model communication
  • model reuse
  • visual sensing

