Wide-Area Crowd Counting via Ground-Plane Density Maps and Multi-View Fusion CNNs

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

126 Scopus Citations

Detail(s)

Original language: English
Title of host publication: Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
Publisher: Institute of Electrical and Electronics Engineers, Inc.
Pages: 8289-8298
ISBN (print): 9781728132938
Publication status: Published - Jun 2019

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume: 2019-June
ISSN (print): 1063-6919
ISSN (electronic): 2575-7075

Conference

Title: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Place: United States
City: Long Beach
Period: 16 - 20 June 2019

Abstract

Crowd counting in single-view images has achieved outstanding performance on existing counting datasets. However, single-view counting is not applicable to large and wide scenes (e.g., public parks, long subway platforms, or event spaces) because a single camera cannot capture the whole scene in adequate detail for counting, e.g., when the scene is too large to fit into the camera's field-of-view, too long so that faraway crowds appear at too low a resolution, or when there are too many large objects occluding large portions of the crowd. Solving the wide-area counting task therefore requires multiple cameras with overlapping fields-of-view. In this paper, we propose a deep neural network framework for multi-view crowd counting, which fuses information from multiple camera views to predict a scene-level density map on the ground-plane of the 3D world. We consider 3 versions of the fusion framework: the late fusion model fuses camera-view density maps; the naive early fusion model fuses camera-view feature maps; and the multi-view multi-scale early fusion model further encourages features aligned to the same ground-plane point to have consistent scales. We test our 3 fusion models on 3 multi-view counting datasets: PETS2009, DukeMTMC, and a newly collected multi-view counting dataset containing a crowded street intersection. Our methods achieve state-of-the-art results compared with other multi-view counting baselines.
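
The sketch below is a rough PyTorch illustration of the late-fusion and naive early-fusion variants described in the abstract, not the authors' released code. The class and function names (CameraBranch, project_to_ground, LateFusionCounter, EarlyFusionCounter), the layer sizes, and the use of grid_sample with precomputed calibration-derived sampling grids as a stand-in for the camera-to-ground projection are all assumptions made for illustration.

# Minimal sketch of the multi-view fusion idea, assuming precomputed
# per-camera sampling grids that map ground-plane cells to image coordinates.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CameraBranch(nn.Module):
    """Per-camera CNN mapping an RGB image to a density or feature map."""
    def __init__(self, out_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_channels, 1),
        )

    def forward(self, x):
        return self.net(x)


def project_to_ground(view_map, grid):
    """Warp a camera-view map onto the common ground-plane grid.

    `grid` has shape (B, H_g, W_g, 2) and is assumed to come from the
    camera calibration; grid_sample is used here as a stand-in for the
    camera-to-ground projection.
    """
    return F.grid_sample(view_map, grid, align_corners=False)


class LateFusionCounter(nn.Module):
    """Late fusion: predict a density map per view, project each map to
    the ground plane, then fuse the projections into one scene-level map."""
    def __init__(self, num_views):
        super().__init__()
        self.branches = nn.ModuleList(CameraBranch(1) for _ in range(num_views))
        self.fusion = nn.Sequential(
            nn.Conv2d(num_views, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, views, grids):
        projected = [project_to_ground(b(v), g)
                     for b, v, g in zip(self.branches, views, grids)]
        return self.fusion(torch.cat(projected, dim=1))


class EarlyFusionCounter(nn.Module):
    """Naive early fusion: project per-view *feature* maps to the ground
    plane and decode the concatenation into the scene-level density map."""
    def __init__(self, num_views, feat_channels=16):
        super().__init__()
        self.branches = nn.ModuleList(CameraBranch(feat_channels)
                                      for _ in range(num_views))
        self.decoder = nn.Sequential(
            nn.Conv2d(num_views * feat_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, views, grids):
        projected = [project_to_ground(b(v), g)
                     for b, v, g in zip(self.branches, views, grids)]
        return self.decoder(torch.cat(projected, dim=1))

A call would pass a list of per-view image tensors and a matching list of sampling grids. The multi-view multi-scale early fusion model described in the abstract additionally selects feature scales that are consistent across views for each ground-plane location before fusion; that step is omitted from this sketch.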

Research Area(s)

  • Categorization, Grouping and Shape, Motion and Tracking, Recognition: Detection, Retrieval, Scene Analysis and Understanding, Segmentation

Citation Format(s)

Wide-Area Crowd Counting via Ground-Plane Density Maps and Multi-View Fusion CNNs. / Zhang, Qi; Chan, Antoni B.
Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019. Institute of Electrical and Electronics Engineers, Inc., 2019. p. 8289-8298, 8953461 (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; Vol. 2019-June).
