ESP-PCT: Enhanced VR Semantic Performance through Efficient Compression of Temporal and Spatial Redundancies in Point Cloud Transformers
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Mei, Luoyu; Wang, Shuai; Cheng, Yun; et al.
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Title of host publication | Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24) |
Editors | Kate Larson |
Publisher | International Joint Conferences on Artificial Intelligence |
Pages | 1182-1190 |
ISBN (electronic) | 9781956792041 |
Publication status | Published - Aug 2024 |
Publication series
Name | IJCAI International Joint Conference on Artificial Intelligence |
---|---|
ISSN (Print) | 1045-0823 |
Conference
Title | 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024) |
---|---|
Location | International Convention Center Jeju |
Place | Korea, Republic of |
City | Jeju Island |
Period | 3 - 9 August 2024 |
Abstract
Semantic recognition is pivotal in virtual reality (VR) applications, enabling immersive and interactive experiences. A promising approach is utilizing millimeter-wave (mmWave) signals to generate point clouds. However, the high computational and memory demands of current mmWave point cloud models hinder their efficiency and reliability. To address this limitation, our paper introduces ESP-PCT, a novel Enhanced Semantic Performance Point Cloud Transformer with a two-stage semantic recognition framework tailored for VR applications. ESP-PCT leverages the accuracy of sensory point cloud data and optimizes the semantic recognition process, where the localization and focus stages are trained jointly in an end-to-end manner. We evaluate ESP-PCT under various VR semantic recognition conditions, demonstrating substantial gains in recognition efficiency. Notably, ESP-PCT achieves an accuracy of 93.2% while simultaneously reducing computational requirements (FLOPs) by 76.9% and memory usage by 78.2% compared to the existing Point Transformer model. These results underscore ESP-PCT's potential for VR semantic recognition by achieving high accuracy while reducing redundancy. The code and data of this project are available at https://github.com/lymei-SEU/ESP-PCT. © 2024 International Joint Conferences on Artificial Intelligence. All rights reserved.
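To make the two-stage idea in the abstract concrete, the following is a minimal sketch (not the authors' released code) of a localization-then-focus point cloud classifier in PyTorch: a lightweight stage scores points and keeps only the top-k, and a transformer "focus" stage classifies the reduced set, with both stages trained jointly end-to-end. All module names, parameter choices, and the soft score-weighting trick are hypothetical illustrations, not ESP-PCT's actual architecture; see the linked repository for the real implementation.

```python
# Illustrative sketch of a two-stage (localization -> focus) point cloud
# classifier. Hypothetical design, not the ESP-PCT reference code.
import torch
import torch.nn as nn


class TwoStagePointClassifier(nn.Module):
    def __init__(self, in_dim=3, embed_dim=64, num_classes=8, keep_k=128):
        super().__init__()
        self.keep_k = keep_k
        # Stage 1: per-point scoring for localization (which points to keep).
        self.point_embed = nn.Linear(in_dim, embed_dim)
        self.score_head = nn.Linear(embed_dim, 1)
        # Stage 2: transformer encoder over the retained points ("focus").
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.cls_head = nn.Linear(embed_dim, num_classes)

    def forward(self, points):                       # points: (B, N, 3)
        feats = self.point_embed(points)             # (B, N, D)
        scores = self.score_head(feats).squeeze(-1)  # (B, N)
        # Keep the top-k highest-scoring points; weighting the kept features
        # by their (sigmoid) scores keeps gradients flowing into the scorer,
        # so localization and focus stages train jointly end-to-end.
        k = min(self.keep_k, points.shape[1])
        topk = scores.topk(k, dim=1).indices         # (B, k)
        gathered = torch.gather(
            feats, 1, topk.unsqueeze(-1).expand(-1, -1, feats.shape[-1]))
        weights = torch.sigmoid(torch.gather(scores, 1, topk)).unsqueeze(-1)
        encoded = self.encoder(gathered * weights)   # (B, k, D)
        return self.cls_head(encoded.mean(dim=1))    # (B, num_classes)


if __name__ == "__main__":
    model = TwoStagePointClassifier()
    logits = model(torch.randn(2, 512, 3))           # two clouds of 512 points
    print(logits.shape)                              # torch.Size([2, 8])
```

Because the focus stage only attends over the k retained points rather than the full cloud, the attention cost drops from O(N²) to O(k²), which is the intuition behind the FLOPs and memory reductions reported in the abstract.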
Bibliographic Note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Citation Format(s)
ESP-PCT: Enhanced VR Semantic Performance through Efficient Compression of Temporal and Spatial Redundancies in Point Cloud Transformers. / Mei, Luoyu; Wang, Shuai; Cheng, Yun et al.
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). ed. / Kate Larson. International Joint Conferences on Artificial Intelligence, 2024. p. 1182-1190 (IJCAI International Joint Conference on Artificial Intelligence).
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review