3D Question Answering

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

4 Scopus Citations

Author(s)

Ye, Shuquan; Chen, Dongdong; Han, Songfang et al.

Related Research Unit(s)

Detail(s)

Original language: English
Journal / Publication: IEEE Transactions on Visualization and Computer Graphics
Publication status: Online published - 29 Nov 2022

Abstract

Visual question answering (VQA) has experienced tremendous progress in recent years. However, most efforts have only focused on 2D image question-answering tasks. In this paper, we extend VQA to its 3D counterpart, 3D question answering (3DQA), which can facilitate a machine's perception of 3D real-world scenarios. Unlike 2D image VQA, 3DQA takes the color point cloud as input and requires both appearance and 3D geometry comprehension to answer 3D-related questions. To this end, we propose a novel transformer-based 3DQA framework, "3DQA-TR", which consists of two encoders that exploit the appearance and geometry information, respectively. Finally, the multi-modal information about the appearance, geometry, and linguistic question can attend to each other via a 3D-linguistic BERT to predict the target answers. To verify the effectiveness of our proposed 3DQA framework, we further develop the first 3DQA dataset, "ScanQA", which builds on the ScanNet dataset and contains over 10K question-answer pairs for 806 scenes. To the best of our knowledge, ScanQA is the first large-scale, fully human-annotated dataset with natural-language questions and free-form answers in 3D environments. We also use several visualizations and experiments to investigate the astonishing diversity of the collected questions and the significant differences between this task and both 2D VQA and 3D captioning. Extensive experiments on this dataset demonstrate the clear superiority of our proposed 3DQA framework over state-of-the-art VQA frameworks and the effectiveness of our major designs. Our code and dataset will be made publicly available to facilitate research in this direction. The code and data are available at http://shuquanye.com/3DQA_website/.
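
To make the two-encoder design concrete, below is a minimal PyTorch sketch of a 3DQA pipeline in the spirit described by the abstract: an appearance encoder over point colors, a geometry encoder over point coordinates, and a joint transformer in which point and question tokens attend to each other. This is not the paper's implementation; the class name, the point-wise MLP encoders, the fixed answer vocabulary, and all dimensions are illustrative assumptions (the paper uses a 3D-linguistic BERT for fusion).

```python
import torch
import torch.nn as nn

class ThreeDQATRSketch(nn.Module):
    """Illustrative sketch of a two-encoder 3DQA pipeline (not the paper's code)."""

    def __init__(self, d_model=256, num_answers=1000, vocab_size=30522):
        super().__init__()
        # Appearance encoder: per-point RGB features (assumption: a simple
        # point-wise MLP stands in for the paper's appearance encoder).
        self.appearance_enc = nn.Sequential(
            nn.Linear(3, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        # Geometry encoder: per-point XYZ coordinates.
        self.geometry_enc = nn.Sequential(
            nn.Linear(3, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        # Question embedding (stand-in for a BERT tokenizer + embeddings).
        self.word_emb = nn.Embedding(vocab_size, d_model)
        # Joint transformer: appearance, geometry, and language tokens attend
        # to each other (stand-in for the 3D-linguistic BERT).
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=4)
        # Answer head: classify over a fixed answer vocabulary (assumption).
        self.answer_head = nn.Linear(d_model, num_answers)

    def forward(self, xyz, rgb, question_ids):
        # xyz, rgb: (B, N, 3) point coordinates / colors
        # question_ids: (B, L) token ids of the question
        app_tok = self.appearance_enc(rgb)           # (B, N, d)
        geo_tok = self.geometry_enc(xyz)             # (B, N, d)
        q_tok = self.word_emb(question_ids)          # (B, L, d)
        tokens = torch.cat([app_tok, geo_tok, q_tok], dim=1)
        fused = self.fusion(tokens)                  # joint cross-modal attention
        # Pool the fused sequence and predict an answer distribution.
        return self.answer_head(fused.mean(dim=1))   # (B, num_answers)

# Example: one scene with 1024 colored points and a 12-token question.
model = ThreeDQATRSketch()
logits = model(torch.rand(1, 1024, 3), torch.rand(1, 1024, 3),
               torch.randint(0, 30522, (1, 12)))
print(logits.shape)  # torch.Size([1, 1000])
```

Treating the answer as a classification over a fixed vocabulary is one common VQA design choice assumed here for brevity; free-form answers, as collected in ScanQA, would instead require a generative decoder.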

Research Area(s)

  • Point cloud, scene understanding

Citation Format(s)

3D Question Answering. / Ye, Shuquan; Chen, Dongdong; Han, Songfang et al.
In: IEEE Transactions on Visualization and Computer Graphics, 29.11.2022.
