Multi-Modal Autonomous Ultrasound Scanning for Efficient Human–Machine Fusion Interaction
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Chengwen Luo, Yuhao Chen, Haozheng Cao, et al.
Detail(s)
Original language | English
---|---
Number of pages | 12
Journal / Publication | IEEE Transactions on Automation Science and Engineering
Online published | 29 Feb 2024
Publication status | Online published - 29 Feb 2024
Abstract
Robotic autonomous ultrasound imaging is a challenging task, as robots require strong analytical capabilities to make sound decisions about complex spatial relationships. In this paper, we integrate visual and tactile information into a robotic ultrasound system, drawing inspiration from how human doctors conduct ultrasound scans, and explore the impact of each information modality on our task. The proposed multimodal deep reinforcement learning (DRL) framework integrates real-time visual feedback and tactile perception and directly outputs 6D pose decisions to control the ultrasound probe, thereby achieving fully autonomous ultrasound imaging of soft, movable, and unmarked targets. We demonstrate the feasibility of our method on a simulation platform, propose an effective model transfer learning method, and then further evaluate the approach in a real-world environment. The results indicate that our approach effectively enhances the performance of autonomous ultrasound scanning and that manual adjustments further optimize the outcomes.

Note to Practitioners: This work is motivated by the increasing demand for intelligent human-machine interaction in medical applications. By automating traditional medical scanning procedures such as ultrasound scanning, scanning efficiency can be greatly improved. In this work, we propose a multi-modal autonomous ultrasound scanning system based on DRL, which can be applied to improve the efficiency of human-machine interaction in medical environments, for example to execute daily health screening or to assist in emergency situations. © 2024 IEEE.
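To make the described pipeline concrete, below is a minimal PyTorch sketch of a multimodal policy that fuses a visual observation with a tactile (force/torque) reading and outputs a 6D pose action, as the abstract describes. All layer sizes, the concatenation-based fusion, the six-dimensional force/torque input, and the bounded pose-delta action parameterization are illustrative assumptions for exposition; the paper's actual network architecture and training setup are not reproduced here.

```python
# Illustrative sketch only: a multimodal policy head for DRL-based probe
# control. Layer sizes, fusion scheme, and the 6D action parameterization
# are assumptions, not the authors' architecture.
import torch
import torch.nn as nn

class MultiModalPolicy(nn.Module):
    def __init__(self, tactile_dim: int = 6, action_dim: int = 6):
        super().__init__()
        # Visual branch: encodes a single-channel ultrasound/camera frame.
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
        )
        # Tactile branch: encodes a force/torque reading at the probe tip.
        self.tactile_encoder = nn.Sequential(
            nn.Linear(tactile_dim, 64), nn.ReLU(),
        )
        # Fusion + policy head: joint features -> bounded 6D pose deltas.
        self.head = nn.Sequential(
            nn.Linear(128 + 64, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, tactile: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.visual_encoder(image), self.tactile_encoder(tactile)], dim=-1
        )
        return self.head(fused)

# Usage: one 64x64 frame plus one force/torque reading -> one 6D action.
policy = MultiModalPolicy()
action = policy(torch.randn(1, 1, 64, 64), torch.randn(1, 6))
print(action.shape)  # torch.Size([1, 6])
```

In such a design, the Tanh output keeps each pose component in [-1, 1] so it can be scaled to safe per-step translation and rotation limits before being sent to the robot; the actual action scaling and safety constraints used in the paper are not specified here.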
Research Area(s)
- Autonomous ultrasound scanning, deep reinforcement learning, Medical diagnostic imaging, multimodal, Navigation, Probes, Robot sensing systems, Robots, Task analysis, Ultrasonic imaging
Citation Format(s)
Multi-Modal Autonomous Ultrasound Scanning for Efficient Human–Machine Fusion Interaction. / Luo, Chengwen; Chen, Yuhao; Cao, Haozheng et al.
In: IEEE Transactions on Automation Science and Engineering, 29.02.2024.