Living Object Grasping Using Two-Stage Graph Reinforcement Learning

Research output: Journal Publications and Reviews › Publication in refereed journal › peer-review



Detail(s)

Original language: English
Pages (from-to): 1950-1957
Journal / Publication: IEEE Robotics and Automation Letters
Volume: 6
Issue number: 2
Online published: 19 Feb 2021
Publication status: Published - Apr 2021

Abstract

Living objects are hard to grasp because they can actively dodge and struggle by writhing or deforming while, or even before, being contacted, and modeling or predicting their responses to grasping is extremely difficult. This paper presents an algorithm based on reinforcement learning (RL) to tackle this challenging problem. Considering the complexity of living object grasping, we divide the whole task into a pre-grasp stage and an in-hand stage and let the algorithm switch between them automatically. The pre-grasp stage aims to find a good pose for the robot hand to approach a living object and perform a grasp. Dense reward functions based on the poses of both hand and object are proposed to facilitate the learning of correct hand actions. Since an object held in hand may struggle to escape, the robot hand needs to adjust its configuration and respond correctly to the object's movement. Hence, the goal of the in-hand stage is to determine an appropriate adjustment of the finger configuration so that the robot hand keeps holding the object. At this stage, we treat the robot hand as a graph and use a graph convolutional network (GCN) to determine the hand action. We test our algorithm in both simulation and real experiments, which show its good performance in living object grasping. More results are available on our website: https://sites.google.com/view/graph-rl.
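The abstract does not give implementation details, but the in-hand stage's core idea of treating the robot hand as a graph and passing it through a GCN to produce hand actions can be illustrated with a minimal PyTorch sketch. Everything below is an assumption made for illustration only, not the authors' actual architecture: the hypothetical joint graph of a four-finger hand, the per-joint feature size, the two-layer network, and the per-joint action head.

```python
# Illustrative sketch only: the paper's exact graph structure, features, and
# network sizes are not stated in the abstract, so all choices here are assumed.
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):
        # a_hat: (N, N) normalized adjacency with self-loops
        # h:     (N, in_dim) per-node (per-joint) features
        return torch.relu(self.linear(a_hat @ h))


class HandGCNPolicy(nn.Module):
    """Maps per-joint observations to bounded per-joint configuration adjustments."""
    def __init__(self, obs_dim, hidden_dim=64):
        super().__init__()
        self.gcn1 = GCNLayer(obs_dim, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)  # one joint-angle delta per node

    def forward(self, h, a_hat):
        h = self.gcn1(h, a_hat)
        h = self.gcn2(h, a_hat)
        return torch.tanh(self.head(h)).squeeze(-1)  # adjustments in [-1, 1]


def normalized_adjacency(adj):
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    a = adj + torch.eye(adj.shape[0])
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt


if __name__ == "__main__":
    # Hypothetical hand graph: a palm node (0) plus 4 fingers with 3 joints each.
    num_joints = 13
    adj = torch.zeros(num_joints, num_joints)
    for f in range(4):
        base = 1 + 3 * f
        adj[0, base] = adj[base, 0] = 1.0          # attach finger base to palm
        for j in range(base, base + 2):
            adj[j, j + 1] = adj[j + 1, j] = 1.0    # chain joints along the finger
    a_hat = normalized_adjacency(adj)

    obs_dim = 8  # assumed per-joint features, e.g. angle, velocity, contact signal
    policy = HandGCNPolicy(obs_dim)
    obs = torch.randn(num_joints, obs_dim)
    action = policy(obs, a_hat)   # per-joint adjustment for the in-hand stage
    print(action.shape)           # torch.Size([13])
```

In such a setup the graph structure lets each joint's action depend on its neighbors' states, which is one plausible way to realize the coordinated finger adjustments the abstract describes; the policy would then be trained with an RL algorithm rather than used as a standalone module.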

Research Area(s)

  • Deep Learning in Grasping and Manipulation, Dexterous Manipulation, Force, Grasping, In-Hand Manipulation, Reinforcement learning, Robot kinematics, Robot sensing systems, Robots, Task analysis