Inverse Kinematics Embedded Network for Robust Patient Anatomy Avatar Reconstruction From Multimodal Data

Research output: Journal Publications and Reviews, RGC 21 - Publication in refereed journal, peer-reviewed

Detail(s)

Original language: English
Pages (from-to): 3395-3402
Journal / Publication: IEEE Robotics and Automation Letters
Volume: 9
Issue number: 4
Online published: 19 Feb 2024
Publication status: Published - Apr 2024

Abstract

Patient modelling has a wide range of applications in medicine and healthcare, such as clinical teaching, surgical navigation and automatic robotized scanning. Because patients are typically covered or occluded in medical scenes, directly regressing human meshes from single RGB images is challenging. To this end, we design a deep-learning-based network that reconstructs patient anatomy from RGB-D images, with three key modules: 1) the attention-based multimodal fusion module, 2) the analytical inverse kinematics module, and 3) the anatomical layer module. In our pipeline, the color and depth modalities are fully fused by the multimodal attention module to obtain a cover-insensitive feature map. The 3D keypoints estimated from the fused features are then converted to patient model parameters through the embedded analytical inverse kinematics module. To capture more detailed patient structures, we also present a parametric anatomy avatar that extends the Skinned Multi-Person Linear Model (SMPL) with internal bone and artery models. The final meshes are driven by the predicted parameters via the anatomical layer module, generating digital twins of patients. Experimental results on the Simultaneously-Collected Multimodal Lying Pose Dataset demonstrate that our approach surpasses state-of-the-art human mesh recovery methods and is robust to occlusions. © 2024 IEEE.
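The abstract outlines the architecture but gives no implementation details. Purely as an illustration, and not the authors' code, the following minimal PyTorch sketch shows the two steps that carry most of the technical weight: cross-attention fusion of color and depth features, and a closed-form "swing" rotation of the kind that analytical inverse kinematics methods (e.g., HybrIK) use to convert estimated 3D keypoints into joint rotations. All class and function names, shapes, and hyperparameters here are assumptions.

import torch
import torch.nn as nn


class MultimodalAttentionFusion(nn.Module):
    """Hypothetical cross-attention fusion of color and depth feature maps.

    Queries come from the RGB stream and keys/values from the depth stream,
    so depth cues can compensate for covered or occluded body regions."""

    def __init__(self, channels: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        # rgb_feat, depth_feat: (B, C, H, W) maps from two separate backbones.
        b, c, h, w = rgb_feat.shape
        q = rgb_feat.flatten(2).transpose(1, 2)     # (B, HW, C) queries from color
        kv = depth_feat.flatten(2).transpose(1, 2)  # (B, HW, C) keys/values from depth
        fused, _ = self.attn(q, kv, kv)             # depth attends into the color stream
        fused = self.norm(fused + q)                # residual keeps the color features
        return fused.transpose(1, 2).reshape(b, c, h, w)


def analytic_swing(template_dir: torch.Tensor, target_dir: torch.Tensor,
                   eps: float = 1e-8) -> torch.Tensor:
    """Closed-form 'swing' rotation aligning a template bone direction with an
    estimated bone direction, via Rodrigues' formula. Inputs are (B, 3) unit
    vectors; output is (B, 3, 3) rotation matrices. The anti-parallel case is
    degenerate and left unhandled, as this is only a sketch."""
    axis = torch.cross(template_dir, target_dir, dim=-1)  # rotation axis * sin(theta)
    sin = axis.norm(dim=-1, keepdim=True).clamp(min=eps)  # (B, 1)
    cos = (template_dir * target_dir).sum(-1, keepdim=True)
    k = axis / sin                                        # unit rotation axis
    K = torch.zeros(k.shape[0], 3, 3, dtype=k.dtype, device=k.device)
    K[:, 0, 1], K[:, 0, 2] = -k[:, 2], k[:, 1]            # skew-symmetric [k]_x
    K[:, 1, 0], K[:, 1, 2] = k[:, 2], -k[:, 0]
    K[:, 2, 0], K[:, 2, 1] = -k[:, 1], k[:, 0]
    eye = torch.eye(3, dtype=k.dtype, device=k.device).expand_as(K)
    # R = I + sin(theta) [k]_x + (1 - cos(theta)) [k]_x^2
    return eye + sin.unsqueeze(-1) * K + (1 - cos).unsqueeze(-1) * (K @ K)


# Hypothetical usage:
#   fused = MultimodalAttentionFusion(256)(rgb_feat, depth_feat)  # (B, 256, H, W)
#   R = analytic_swing(template_bones, predicted_bones)           # (B, 3, 3) per bone

In a complete pipeline along the lines the abstract describes, the fused feature map would feed a 3D keypoint head, the per-bone rotations would parameterize the SMPL-style avatar, and the anatomical layer would deform the internal bone and artery meshes with the same kinematic chain.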

Research Area(s)

  • Deep learning for visual perception
  • Gesture, posture and facial expressions
  • Modeling and simulating humans
  • RGB-D perception