V2-SfMLearner: Learning Monocular Depth and Ego-motion for Multimodal Wireless Capsule Endoscopy
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Related Research Unit(s)
Detail(s)
| Original language | English |
| --- | --- |
| Number of pages | 15 |
| Journal / Publication | IEEE Transactions on Automation Science and Engineering |
| Publication status | Online published - 16 Jan 2025 |
Link(s)
Abstract
Deep learning can predict depth maps and capsule ego-motion from capsule endoscopy videos, aiding 3D scene reconstruction and lesion localization. However, collisions of the capsule endoscope within the gastrointestinal tract introduce vibration perturbations into the training data. Existing solutions focus solely on vision-based processing, neglecting auxiliary signals such as vibrations that could reduce noise and improve performance. We therefore propose V2-SfMLearner, a multimodal approach that integrates vibration signals into vision-based depth and capsule motion estimation for monocular capsule endoscopy. We construct a multimodal capsule endoscopy dataset containing vibration and visual signals, and develop an unsupervised method that uses vision-vibration signals to effectively eliminate vibration perturbations through multimodal learning. Specifically, we carefully design a vibration network branch and a Fourier fusion module to detect and mitigate vibration noise. The fusion framework is compatible with popular vision-only algorithms. Extensive validation on the multimodal dataset demonstrates superior performance and robustness compared with vision-only algorithms. Without the need for large external equipment, V2-SfMLearner has the potential to be integrated into clinical capsule robots, providing real-time and dependable digestive examination tools. The findings show promise for practical implementation in clinical settings, enhancing doctors' diagnostic capabilities.
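To make the idea of fusing a vibration embedding with visual features in the frequency domain concrete, the sketch below shows one possible form such a fusion block could take. It is only an illustrative assumption: the class name `FourierFusion`, the tensor shapes, the per-channel sigmoid gate, and the residual connection are all hypothetical choices and do not reproduce the authors' published architecture.

```python
# Hypothetical sketch of a Fourier-domain fusion block (not the authors' design):
# visual feature maps are moved to the frequency domain, re-weighted channel-wise
# by a gate predicted from a vibration embedding, and transformed back.
import torch
import torch.nn as nn


class FourierFusion(nn.Module):
    def __init__(self, channels: int, vib_dim: int):
        super().__init__()
        # Map the vibration embedding to one gate value per feature channel.
        self.gate = nn.Sequential(
            nn.Linear(vib_dim, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor, vib: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) visual features; vib: (B, vib_dim) vibration embedding.
        b, c, h, w = feat.shape
        spec = torch.fft.rfft2(feat, dim=(-2, -1))   # complex spectrum per channel
        g = self.gate(vib).view(b, c, 1, 1)          # per-channel gate in [0, 1]
        spec = spec * g                              # attenuate channels flagged by the vibration cue
        fused = torch.fft.irfft2(spec, s=(h, w), dim=(-2, -1))
        return feat + fused                          # residual connection keeps the original features


if __name__ == "__main__":
    fusion = FourierFusion(channels=64, vib_dim=16)
    out = fusion(torch.randn(2, 64, 32, 32), torch.randn(2, 16))
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Because the block takes feature maps in and returns feature maps of the same shape, a module of this kind could in principle be dropped into the encoder of a standard vision-only depth/pose pipeline, which is consistent with the abstract's claim that the fusion framework is compatible with popular vision-only algorithms.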
Research Area(s)
- Depth estimation, multimodal learning, robot ego-motion, unsupervised learning, vibration signal, wireless capsule endoscopy
Citation Format(s)
V2-SfMLearner: Learning Monocular Depth and Ego-motion for Multimodal Wireless Capsule Endoscopy. / Bai, Long; Cui, Beilei; Wang, Liangyu et al.
In: IEEE Transactions on Automation Science and Engineering, 16.01.2025.