Learning Based Human-robot Physical Collaboration


Student thesis: Doctoral Thesis



Supervisors
  • Lidai WANG (Supervisor)
  • Jia Pan (External person) (External Co-Supervisor)

Award date: 9 Jan 2024


Human-robot collaboration (HRC) is in fast-growing demand across both service and industrial robotics. Human-robot physical collaboration (pHRC), a subfield of HRC, refers to scenarios in which the human and the robot are in physical contact and constitute a tightly coupled dynamical system to complete tasks. A key challenge is how the robot can understand human intention and play a more active role, completing pHRC tasks while reducing human effort. Though challenging, pHRC has extensive applications such as cooperative object handling, collaborative manufacturing, and co-manipulation for assembly. We therefore focus on these challenging topics and expect our algorithms to be applicable in the above-mentioned applications.

To solve the aforementioned problems, we propose learning-based algorithms. To match human behavioral habits, we present a controller that uses reinforcement learning (RL) to optimize the parameters that are manually adjusted in the conventional controller, together with a Long Short-Term Memory (LSTM) network to predict human intention. We also investigate how to make robots directly understand a specific person's intentions through natural language. For role adaptation between the human and the robot, we present an RL-based algorithm that outputs a confidence coefficient for the estimated human intention and allocates the robot's role accordingly. Moreover, we investigate a challenging and practical glazing task, a human-in-the-loop board insertion task in a lab setup. This task requires more precise position and force control than general pHRC tasks due to the millimeter tolerance and hard contact between the board and the frame. We propose a controller that combines RL with a conventional controller to learn how to actively help the human align and insert the board.
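As a reference point for the controllers above, a conventional admittance controller renders virtual mass-damper dynamics between the measured human force and the robot's motion; the thesis's RL component replaces the hand-tuned gains of such a controller. The sketch below is a minimal 1-DoF illustration of that admittance law only; the function name, gain values, and time step are illustrative assumptions, not taken from the thesis.

```python
def admittance_step(v, f_h, m=2.0, d=10.0, dt=0.01):
    """One discrete step of a 1-DoF admittance law  M*dv/dt + D*v = f_h.

    v    : current end-effector velocity (m/s)
    f_h  : measured human interaction force (N)
    m, d : virtual mass and damping -- the kind of hand-tuned gains that
           an RL-based scheme would adapt online (values here are
           illustrative, not from the thesis)
    """
    a = (f_h - d * v) / m   # acceleration implied by the virtual dynamics
    return v + a * dt       # forward-Euler velocity update

# Under a constant human push, the velocity converges to f_h / d:
v = 0.0
for _ in range(2000):       # 20 s of simulated interaction at dt = 0.01
    v = admittance_step(v, f_h=5.0)
# v is now close to 5.0 / 10.0 = 0.5 m/s
```

A lower virtual damping d makes the robot feel "lighter" and more compliant to the human's push, which is precisely the trade-off an adaptive scheme can tune per task phase instead of fixing it by hand.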

The presented learning-based algorithms are tested in real environments and demonstrate good results. We observe that human intention can be predicted effectively through both the haptic channel and natural language, and we enable the robot to adjust its role automatically. For the human-in-the-loop board insertion task, our method achieves a higher success rate and a shorter completion time than admittance control.