Multi-Modal Autonomous Ultrasound Scanning for Efficient Human–Machine Fusion Interaction

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Author(s)

  • Chengwen Luo
  • Yuhao Chen
  • Haozheng Cao
  • Mustafa A. Al Sibahee
  • Jin Zhang

Detail(s)

Original language: English
Number of pages: 12
Journal / Publication: IEEE Transactions on Automation Science and Engineering
Online published: 29 Feb 2024
Publication status: Online published - 29 Feb 2024

Abstract

Robotic autonomous ultrasound imaging is a challenging task, as robots require strong analytical capabilities to make sound decisions about complex spatial relationships. In this paper, drawing inspiration from how human doctors conduct ultrasound scans, we integrate visual and tactile information into a robotic ultrasound system and explore the impact of different information modalities on the task. The proposed multimodal deep reinforcement learning (DRL) framework integrates real-time visual feedback with tactile perception and directly outputs 6D pose decisions to control the ultrasound probe, thereby achieving fully autonomous ultrasound imaging of soft, movable, and unmarked targets. We demonstrate the feasibility of our method on a simulation platform, propose an effective model transfer learning method, and further evaluate the approach in a real-world environment. The results indicate that our approach effectively enhances the performance of autonomous ultrasound scanning, and that manual adjustments further improve the outcomes.

Note to Practitioners—This work is motivated by the increasing demand for intelligent human–machine interaction in medical applications. Automating traditional medical scanning procedures such as ultrasound scanning can greatly improve their efficiency. In this work, we propose a multi-modal autonomous ultrasound scanning system based on DRL, which can improve the efficiency of human–machine interaction in medical environments, whether for routine health screening or in emergency situations. © 2024 IEEE.
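The abstract describes a policy that fuses camera images with the probe's tactile (force/torque) readings and directly outputs a 6D pose command. As a rough illustrative sketch only, and not the authors' implementation, the following PyTorch code shows what such a multimodal policy network could look like; the class name MultiModalPolicy, the layer sizes, and the input shapes are all assumptions made for this example.

import torch
import torch.nn as nn

class MultiModalPolicy(nn.Module):
    """Illustrative sketch: fuses a camera image and a 6-axis force/torque
    reading into a bounded 6D pose command (3 translation + 3 rotation)."""

    def __init__(self, image_channels: int = 3, tactile_dim: int = 6, hidden_dim: int = 256):
        super().__init__()
        # Visual branch: small CNN encoder for the scene camera image.
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(image_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(64 * 4 * 4, hidden_dim),
        )
        # Tactile branch: MLP for the force/torque signal at the probe tip.
        self.tactile_encoder = nn.Sequential(
            nn.Linear(tactile_dim, 64), nn.ReLU(),
            nn.Linear(64, hidden_dim),
        )
        # Fusion head: concatenated features -> 6D pose adjustment,
        # squashed to [-1, 1] so it can be rescaled to the probe's limits.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 6), nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, tactile: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.visual_encoder(image), self.tactile_encoder(tactile)], dim=-1)
        return self.head(fused)

# Hypothetical usage: one decision step for the probe controller.
policy = MultiModalPolicy()
image = torch.randn(1, 3, 84, 84)    # placeholder camera frame
tactile = torch.randn(1, 6)          # placeholder force/torque reading
pose_delta = policy(image, tactile)  # shape (1, 6): normalized 6D pose command

In an actor-critic DRL setup of the kind the abstract describes, a network of this shape would play the role of the actor, with the bounded output rescaled to the probe's permitted translation and rotation ranges before being sent to the robot.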

Research Area(s)

  • Autonomous ultrasound scanning, deep reinforcement learning, Medical diagnostic imaging, multimodal, Navigation, Probes, Robot sensing systems, Robots, Task analysis, Ultrasonic imaging