Abstract
As two fundamental representation modalities for 3D objects, 3D point clouds and multi-view 2D images record shape information from the complementary domains of geometric structure and visual appearance. In the current deep learning era, remarkable progress in processing these two data modalities has been achieved by customizing compatible 3D and 2D network architectures for each. However, unlike multi-view image-based 2D visual modeling paradigms, which have shown leading performance on several common 3D shape recognition benchmarks, point cloud-based 3D geometric modeling paradigms are still limited by insufficient learning capacity, owing to the difficulty of extracting discriminative features from irregular geometric signals. In this paper, we explore the possibility of boosting deep 3D point cloud encoders by transferring visual knowledge extracted from deep 2D image encoders under a standard teacher-student distillation workflow. Specifically, we propose PointMCD, a unified multi-view cross-modal distillation architecture that pairs a pretrained deep image encoder as the teacher with a deep point encoder as the student. To perform heterogeneous feature alignment between the 2D visual and 3D geometric domains, we further investigate visibility-aware feature projection (VAFP), by which point-wise embeddings are reasonably aggregated into view-specific geometric descriptors. By aligning multi-view visual and geometric descriptors pairwise, we can obtain more powerful deep point encoders without exhaustive and complicated network modifications. Experiments on 3D shape classification, part segmentation, and unsupervised learning strongly validate the effectiveness of our method. © 2023 IEEE.
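The abstract's core mechanism can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the authors' implementation: max-pooling is assumed as the per-view aggregation, mean-squared error as the alignment objective, and the names `vafp_pool` and `multiview_distill_loss` are hypothetical.

```python
import numpy as np

def vafp_pool(point_feats, visibility):
    """Visibility-aware feature projection (illustrative sketch).

    point_feats: (N, C) point-wise embeddings from the student point encoder.
    visibility:  (V, N) boolean mask; True where point n is visible in view v.
    Returns (V, C) view-specific geometric descriptors by max-pooling over the
    points visible in each view (assumed aggregation; the paper's exact
    pooling may differ).
    """
    V, _ = visibility.shape
    C = point_feats.shape[1]
    desc = np.zeros((V, C))
    for v in range(V):
        visible = point_feats[visibility[v]]
        if len(visible):  # a view with no visible points keeps a zero descriptor
            desc[v] = visible.max(axis=0)
    return desc

def multiview_distill_loss(student_desc, teacher_desc):
    """Pair-wise alignment of view-specific geometric descriptors with the
    teacher's visual descriptors (MSE as a stand-in objective)."""
    return float(np.mean((student_desc - teacher_desc) ** 2))

# Toy example: 6 points with 3-dim features, rendered into 2 views.
rng = np.random.default_rng(0)
pf = rng.normal(size=(6, 3))                 # student point-wise embeddings
vis = np.array([[1, 1, 1, 0, 0, 0],
                [0, 0, 0, 1, 1, 1]], dtype=bool)
g = vafp_pool(pf, vis)                       # (2, 3) geometric descriptors
t = rng.normal(size=(2, 3))                  # teacher's visual descriptors
loss = multiview_distill_loss(g, t)
print(g.shape, loss >= 0.0)
```

In a real training loop the teacher descriptors would come from a frozen, pretrained image encoder applied to the rendered views, and the loss would be backpropagated only into the student point encoder.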
| Original language | English |
|---|---|
| Pages (from-to) | 754-767 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Multimedia |
| Volume | 27 |
| Online published | 22 Jun 2023 |
| DOIs | |
| Publication status | Published - 2025 |
Bibliographical note
Information for this record is supplemented by the author(s) concerned.
Funding
This work was supported by Hong Kong Research Grants Council under Grants 11202320 and 11219422.
Research Keywords
- 3D point cloud
- multi-view images
- cross-modal
- knowledge distillation
- 3D shape recognition
RGC Funding Information
- RGC-funded
Fingerprint
Research topics of 'PointMCD: Boosting Deep Point Cloud Encoders via Multi-view Cross-modal Distillation for 3D Shape Recognition':
- GRF: Learning from 4D Light Fields for Clear Vision in Poor Visibility Environments
  HOU, J. (Principal Investigator / Project Coordinator)
  1/01/22 → …
  Project: Research
- GRF: Learning-based Three-dimensional Point Cloud Data Reconstruction and Processing
  HOU, J. (Principal Investigator / Project Coordinator)
  1/01/21 → 23/12/24
  Project: Research