Generation of Safe and Efficient Motions for Robotic Arms in Human-Robot Collaboration

Student thesis: Doctoral Thesis

Detail(s)

Supervisors/Advisors
  • Jia PAN (Supervisor)
  • Yajing SHEN (Supervisor)
  • Jia Pan (External person) (External Co-Supervisor)
Award date: 18 Aug 2021

Abstract

Many manufacturing tasks, such as assembly, require a level of dexterity and flexibility beyond the capability of state-of-the-art autonomous robotics techniques, so a human worker's involvement is indispensable. One promising way to improve the efficiency of such challenging tasks is human-robot collaboration, where human workers focus on sub-tasks requiring high flexibility and tactile sensing while the robot leverages its high speed and accuracy to accomplish repetitive sub-tasks quickly. Smooth and effective human-robot collaboration requires the human and the robot to share a limited workspace.

This thesis incorporates human habits into robot motion planning, enabling the robot to collaborate with the human more safely and efficiently. We start by formulating two properties of human behavior as objective functions: (1) the areas frequently occupied by the human and (2) the human's prediction of the robot's motion.

For the first property, we design a cost function that takes higher values in areas more frequently occupied by the human. In other words, this cost function serves as a criterion for risky overlap between the human's and the robot's activity regions. With this criterion, the robot can plan its trajectory ahead of time and effectively avert collisions.
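
The idea can be sketched as follows, assuming the human's occupancy is estimated as a 2-D grid of visit frequencies over the shared workspace; the grid resolution, the helper names, and the waypoint-list trajectory format are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def build_occupancy(human_positions, workspace, resolution=0.05):
    """Normalized histogram of observed human positions over a 2-D workspace grid."""
    (xmin, xmax), (ymin, ymax) = workspace
    nx = int((xmax - xmin) / resolution)
    ny = int((ymax - ymin) / resolution)
    grid = np.zeros((nx, ny))
    for x, y in human_positions:
        i = int(np.clip((x - xmin) / resolution, 0, nx - 1))
        j = int(np.clip((y - ymin) / resolution, 0, ny - 1))
        grid[i, j] += 1.0
    return grid / max(grid.max(), 1e-9)

def occupancy_cost(trajectory, grid, workspace, resolution=0.05):
    """Sum of occupancy values along a robot trajectory (list of (x, y) waypoints).

    Trajectories passing through regions the human visits often receive a higher
    cost, so a planner minimizing this term keeps the robot away from them.
    """
    (xmin, _), (ymin, _) = workspace
    cost = 0.0
    for x, y in trajectory:
        i = int(np.clip((x - xmin) / resolution, 0, grid.shape[0] - 1))
        j = int(np.clip((y - ymin) / resolution, 0, grid.shape[1] - 1))
        cost += grid[i, j]
    return cost
```

A planner can then compare candidate trajectories by this cost in addition to its usual smoothness and path-length terms.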

For the second property, we introduce a framework of two neural networks that mimics how two humans collaborate with each other: one network learns a motion controller that minimizes the prediction error of the other network. With this framework, the robot conveys its intention to the human through its motion, so that the human can adapt to the robot's work.
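
One way to read this framework is sketched below, with an "observer" network that predicts the robot's next state and a controller network trained so that its motion both progresses toward the goal and keeps the observer's prediction error low. The network sizes, state and action dimensions, the additive toy dynamics, and the weighting term are assumptions for illustration, not the architecture used in the thesis.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 7, 7  # e.g. joint positions and commanded increments (assumed)

controller = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, ACTION_DIM))
predictor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, STATE_DIM))

opt_c = torch.optim.Adam(controller.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def train_step(state, goal, alpha=0.1):
    """The predictor learns to anticipate the robot's next state; the controller
    learns actions that reach the goal while remaining easy to anticipate."""
    # Predictor update: minimize prediction error on the observed transition.
    with torch.no_grad():
        next_state = state + controller(state)  # toy additive dynamics (assumption)
    pred_loss = ((predictor(state) - next_state) ** 2).mean()
    opt_p.zero_grad(); pred_loss.backward(); opt_p.step()

    # Controller update: task progress plus a penalty on motion that the
    # predictor (standing in for the human observer) could not anticipate.
    next_state = state + controller(state)
    task_loss = ((next_state - goal) ** 2).mean()
    legibility_loss = ((predictor(state).detach() - next_state) ** 2).mean()
    ctrl_loss = task_loss + alpha * legibility_loss
    opt_c.zero_grad(); ctrl_loss.backward(); opt_c.step()
    return ctrl_loss.item()
```

Over many such steps the controller settles on motions that the observer network, standing in for the human partner, can anticipate.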

Offline planning methods cannot respond to the human in a timely manner, so we further propose an imitation learning method that controls the robot based on observations of the human partner. In this work, we collect demonstrations of human-human interaction and use deep learning techniques to implicitly encode the human's preferences in the robot control policy.
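
A behavior-cloning style sketch of this step, assuming demonstrations are stored as pairs of (observation of the human partner, action taken by the demonstrating performer); the feature dimensions and network layout are placeholders rather than the thesis architecture.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

OBS_DIM, ACT_DIM = 12, 7  # partner-observation / robot-action sizes (assumed)

policy = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(),
                       nn.Linear(128, 128), nn.ReLU(),
                       nn.Linear(128, ACT_DIM))

def train(partner_obs, demo_actions, epochs=50):
    """Fit the policy to human-human demonstrations: given what the partner is
    doing, output the action the demonstrating human took in response."""
    data = DataLoader(TensorDataset(partner_obs, demo_actions), batch_size=64, shuffle=True)
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(epochs):
        for obs, act in data:
            loss = ((policy(obs) - act) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()

# At run time the robot queries the same mapping from live observations:
#   action = policy(current_partner_observation)
```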

The human may behave unexpectedly during work, so the robot must be able to update its motion online. Previous work imposes strict restrictions on human motion. In this thesis, we explore a reinforcement learning based method that enables the robot to promptly adapt to the human partner while accounting for both human safety and task efficiency. The method keeps the robot on a trajectory computed by an offline planner while the human partner maintains a safe distance; otherwise, the robot can temporarily leave the trajectory to bypass obstacles, including the human.
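
The trade-off described here can be expressed as a reward that balances tracking the offline trajectory against proximity to the human. The sketch below is a simplified illustration: the distance threshold, weights, and position-only state are assumptions, and a full implementation would train a policy against such a reward with a standard reinforcement learning algorithm.

```python
import numpy as np

SAFE_DISTANCE = 0.5         # meters; assumed safety threshold
W_TRACK, W_SAFE = 1.0, 5.0  # assumed weighting between tracking and safety

def reward(robot_pos, reference_pos, human_pos):
    """Reward for the online controller.

    Tracking term: follow the current waypoint from the offline planner.
    Safety term: penalize being closer to the human than SAFE_DISTANCE, which
    is what lets the learned policy leave the reference trajectory temporarily
    to bypass the human and rejoin it afterwards.
    """
    tracking = -np.linalg.norm(robot_pos - reference_pos)
    dist_to_human = np.linalg.norm(robot_pos - human_pos)
    safety = -max(0.0, SAFE_DISTANCE - dist_to_human)
    return W_TRACK * tracking + W_SAFE * safety
```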

In summary, this thesis presents four methods for generating robot motions that improve smoothness, efficiency, and safety in human-robot collaboration.

Research areas

  • human-robot collaboration
  • motion planning