
Single-Frame-Based Deep View Synchronization for Unsynchronized Multicamera Surveillance

Qi Zhang*, Antoni B. Chan

*Corresponding author for this work

Research output: Journal Publications and Reviews · RGC 21 - Publication in refereed journal · peer-review


Abstract

Multicamera surveillance has been an active research topic for understanding and modeling scenes. Compared to a single camera, multiple cameras provide a larger field of view and more object cues; related applications include multiview counting, multiview tracking, 3-D pose estimation, and 3-D reconstruction. Models for these multicamera-based tasks are usually designed under the assumption that all cameras are temporally synchronized. However, this assumption is not always valid, especially for multicamera systems with network transmission delay and low frame rates due to limited network bandwidth, resulting in desynchronization of the captured frames across cameras. To handle the issue of unsynchronized multicameras, in this article, we propose a synchronization model that works in conjunction with existing deep neural network (DNN)-based multiview models, thus avoiding the redesign of the whole model. We consider two variants of the model, based on where in the pipeline the synchronization occurs: scene-level synchronization and camera-level synchronization. The view synchronization step and the task-specific view fusion and prediction step are unified in the same framework and trained in an end-to-end fashion. Our view synchronization models are applied to different DNN-based multicamera vision tasks under the unsynchronized setting, including multiview counting and 3-D pose estimation, and achieve good performance compared to baselines.
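The abstract's single-frame synchronization idea can be illustrated with a minimal sketch (not the authors' code): for each unsynchronized camera, pick the buffered candidate frame whose feature embedding best matches the reference view's current frame, then feed the matched set to the downstream multiview task. The feature vectors and the cosine-similarity matching rule here are illustrative assumptions; the paper learns the matching inside a DNN trained end-to-end.

```python
def cosine(u, v):
    """Cosine similarity between two feature vectors (plain lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def synchronize(ref_feat, candidate_feats):
    """Return the index of the candidate frame most similar to the
    reference view's frame -- a stand-in for the learned view
    synchronization step described in the abstract."""
    scores = [cosine(ref_feat, f) for f in candidate_feats]
    return max(range(len(scores)), key=scores.__getitem__)
```

For example, with a reference feature `[1.0, 0.0]` and candidates `[[0.0, 1.0], [1.0, 0.1], [0.5, 0.5]]`, the second candidate is selected.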
Original language: English
Pages (from-to): 10653-10667
Number of pages: 15
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 34
Issue number: 12
Online published: 16 May 2022
DOI: 10.1109/TNNLS.2022.3170642
Publication status: Published - Dec 2023

Bibliographical note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Funding

This work was supported by Research Grant Council of Hong Kong Special Administrative Region (SAR), China, under Grant TR32-101/15-R and Grant CityU 11212518.

Research Keywords

  • Synchronization
  • Cameras
  • Task analysis
  • Pose estimation
  • Computational modeling
  • Geometry
  • Crowd counting
  • Deep learning
  • Image matching
  • Surveillance
  • Video synchronization

Publisher's Copyright Statement

  • COPYRIGHT TERMS OF DEPOSITED POSTPRINT FILE: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Zhang, Q., & Chan, A. B. (2023). Single-Frame-Based Deep View Synchronization for Unsynchronized Multicamera Surveillance. IEEE Transactions on Neural Networks and Learning Systems, 34(12), 10653-10667. https://doi.org/10.1109/TNNLS.2022.3170642

RGC Funding Information

  • RGC-funded
