Research on Image-based Navigation and Control in Robotic Surgery


Student thesis: Doctoral Thesis


Detail(s)

Supervisors/Advisors
  • Dong SUN (Supervisor)
  • Changan Zhu (External person) (External Supervisor)
  • Erbao Dong (External person) (Supervisor)
Award date: 31 Oct 2022

Abstract

Robot-assisted minimally invasive surgery (RMIS) has great potential to facilitate modern clinical practice, offering many advantages over traditional open surgery, such as less pain and faster recovery. Given the challenges of the RMIS environment, such as the narrow field of view and the intensive workload imposed on surgeons, enhancing the automatic context awareness of the system plays an essential role in improving surgeon performance and patient safety. Automatic segmentation and pose estimation of surgical instruments are fundamental ingredients of intelligent context awareness in robotic surgery, and are prerequisites for solving related problems such as action recognition and instrument control. However, these tasks remain challenging owing to motion blur, specular reflection, tissue occlusion and gas during surgery. Meanwhile, surgeons must frequently pause the operation of their instruments and adjust the laparoscope to obtain a better field of view, which distracts them, prolongs the operation and leads to fatigue. This thesis addresses these challenges in robotic surgery in the following three parts:

First, an instance segmentation network based on Mask R-CNN is developed to achieve automatic and accurate segmentation of surgical instruments in RMIS. Compared with MF-TAPNet, a state-of-the-art method for this task, the IoU and Dice scores on the public dataset increase by 4.41% and 3.51% for the instrument-part segmentation task, and by 5.66% and 3.85% for the instrument-type segmentation task, respectively. In addition, segmentation accuracy is further improved by combining the public dataset with our labelled in-house dataset for cross-dataset evaluation under different sampling strategies. This work demonstrates the promising generalization capability of the proposed network for surgical instrument instance segmentation, and provides new guidance for improving annotation efficiency when building a new dataset.
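The IoU and Dice figures quoted above are the standard binary-mask overlap metrics; a minimal sketch of how they are computed (NumPy, with toy masks as the assumption, not the thesis's evaluation code):

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Compute IoU and Dice between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

# Toy masks: prediction and ground truth agree on 2 of their 3 pixels each.
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt   = np.array([[1, 1, 0],
                 [0, 0, 1]])
iou, dice = iou_and_dice(pred, gt)  # → 0.5 and ~0.667
```

Dice weights the intersection twice, so it is always at least as large as IoU, which is why the reported Dice gains are smaller than the IoU gains on the same task.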

Second, a deep neural network framework based on object detection is designed for 2D pose estimation of multiple articulated instruments in surgical images and videos. The method detects surgical instruments and their degrees of freedom without kinematic information from robotic encoders or external tracking sensors, and overcomes the shortcomings of traditional pose estimation methods based on heatmap regression. It directly regresses the pixel coordinates of keypoints on the surgical instruments while maintaining detection accuracy across different surgical procedures, which is essential for adjusting the field of view of the continuum laparoscope and for evaluating surgeons' surgical skills.
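One common way to realize detection-based keypoint regression is to regress normalized offsets per detected box and map them back to pixel coordinates; the sketch below illustrates only that mapping step (the box and offsets are hypothetical, not the thesis's actual network outputs):

```python
import numpy as np

def keypoints_from_box(box, offsets):
    """Map per-detection keypoint offsets, normalized to [0, 1] within the
    detection box, to absolute pixel coordinates in the image."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    off = np.asarray(offsets, dtype=float)
    return np.stack([x1 + off[:, 0] * w, y1 + off[:, 1] * h], axis=1)

box = (100.0, 50.0, 300.0, 150.0)    # hypothetical instrument detection (pixels)
offsets = [(0.5, 0.5), (0.1, 0.9)]   # e.g. shaft centre and instrument tip
pts = keypoints_from_box(box, offsets)  # → [[200., 100.], [120., 140.]]
```

Regressing coordinates per detection keeps each instrument's keypoints attached to its box, which avoids the keypoint-grouping ambiguity that heatmap methods face when several instruments overlap.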

Third, a continuum laparoscope with a data-driven control method and learning-based visual feedback is applied to adjust the field of view automatically by tracking the instruments in robotic surgery. A nonlinear system identification method using the Koopman operator with Chebyshev polynomial basis functions is developed, and an LQR controller is then designed on the Koopman model trained from visual feedback. The pixel coordinates of keypoints on the surgical instruments serve as the visual feedback for the control system. Simulation and experimental results validate the feasibility of the proposed method for controlling a continuum laparoscope to adjust the field of view automatically, with accuracy that meets the needs of clinical surgery.
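The identification-plus-LQR pipeline can be sketched in a few lines for a toy scalar system: lift the state with Chebyshev polynomials, fit a linear model in the lifted space by least squares (an EDMD-style Koopman approximation), then solve the discrete-time Riccati equation for the gain. Everything below (the toy dynamics, basis order, weights) is an assumption for illustration, not the thesis's actual model of the continuum laparoscope:

```python
import numpy as np
from numpy.polynomial import chebyshev
from scipy.linalg import solve_discrete_are

def lift(x, order=3):
    """Lift a scalar state with Chebyshev polynomials T_1..T_order
    (T_0 is dropped: the constant mode is uncontrollable)."""
    return np.array([chebyshev.Chebyshev.basis(k)(x)
                     for k in range(1, order + 1)])

# Collect (x_k, u_k, x_{k+1}) data from a toy nonlinear system (assumption).
rng = np.random.default_rng(0)
f = lambda x, u: 0.9 * x + 0.1 * x**2 + 0.2 * u
X = rng.uniform(-1, 1, 200)
U = rng.uniform(-1, 1, 200)
Xn = f(X, U)

# EDMD-style least squares: Psi(x_{k+1}) ≈ A Psi(x_k) + B u_k.
Psi = np.stack([lift(x) for x in X])    # (N, d) lifted states
Psin = np.stack([lift(x) for x in Xn])  # (N, d) lifted next states
Z = np.hstack([Psi, U[:, None]])        # (N, d + 1) regressors
K, *_ = np.linalg.lstsq(Z, Psin, rcond=None)
A, B = K[:-1].T, K[-1:].T               # linear model in the lifted space

# Discrete-time LQR on the lifted model: u = -Kgain @ Psi(x).
Q, R = np.eye(A.shape[0]), np.array([[1.0]])
P = solve_discrete_are(A, B, Q, R)
Kgain = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```

The appeal of this route, as in the thesis, is that a single least-squares fit yields a linear surrogate on which standard optimal-control machinery such as LQR applies directly, even though the underlying plant is nonlinear.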

In summary, the image-based navigation and control methods in this thesis enhance the automatic context awareness of the surgical robot system, demonstrate the great potential of continuum manipulators in robotic surgery, and provide new ideas for automating surgical subtask procedures.

Research areas

  • robotic surgery, medical image analysis, deep learning, continuum robot, data-driven control