Learning from Demonstrations for Robotic Manipulation of Deformable Objects

Project: Research



Manipulating deformable objects is a fundamental problem in robotics. From folding a bed sheet in the bedroom to tying sutures in the surgical operating room, the ability to grasp and manipulate deformable objects is an important capability for assistive and autonomous robots to possess. Reliable and efficient robotic manipulation is also significant from an industrial and economic point of view, since flexible materials are found in almost every industrial product. In the textile industry, the price of clothing is largely determined by the cost of handling and processing the textiles, which are made of highly deformable materials; common tasks such as lapping, (un)folding, and positioning clothes have not yet been automated. In the automotive industry, among the large variety of product components to be processed and assembled, many are partly or highly flexible, including hoses, O-ring seals, and rubber seals. How to assemble these deformable objects is still an open problem, which makes assembly cost one of the dominant factors in the final product price. In the electrical manufacturing industry, winding coils made of flexible materials, such as plastic, is also a well-known challenge. Improvements in deformable object manipulation would also matter for e-commerce. For instance, Amazon's automated warehouses are successful at removing much of the walking and searching for items within a warehouse. However, commercially viable automated picking of deformable objects such as clothes and vegetables remains difficult, and this is one of the open problems that Amazon hopes to see solved through the Amazon Picking Challenge (http://amazonpickingchallenge.org/). 
We formulate manipulation of deformable objects as a problem of learning from demonstrations, in which a human expert demonstrates the task one or more times and a learning algorithm extracts the essence of these demonstrations, so that the robot can perform the task autonomously in new, yet similar, situations. While this line of work has already shown encouraging results in some manipulation tasks, such as scooping and rope knot tying, fundamental limitations remain, and the robotic manipulation capabilities enabled by learning from demonstrations (or any other technique) are still far from human-level. Despite many recent advances, deformable objects continue to prove particularly challenging, which is often attributed to their large state spaces and time-varying kinematic/dynamic properties. Within the scope of this research, we propose to investigate learning from demonstrations for reliable and effective robotic manipulation of deformable objects that can take advantage of large amounts of demonstrations. In particular, we first record large amounts of data of humans demonstrating manipulation tasks, with both parallel-jaw grippers and multi-finger flexible hands. The recorded data will incorporate rich sensor readings, including RGB/depth images, gripper/finger configurations, inertial profiles, and force/haptic feedback. Next, given a new scene, we find a detailed correspondence between the demonstrated scene and the new scene and, when necessary, estimate the physical properties of the deformable objects to be manipulated. Based on the computed correspondence, we then learn a function or policy that warps the demonstration scene onto the new scene. Generalization of the robot's manipulation motion is then achieved by applying this warping function or policy to the robot's motions as well. 
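To make the warping idea concrete, one common instantiation (a sketch, not necessarily the exact formulation this project will adopt) is a thin-plate-spline map fitted to corresponding landmark points in the demonstration scene and the new scene; the demonstrated gripper trajectory is then pushed through the same map. The 2-D example below, with hypothetical landmark and trajectory arrays, fits the spline by solving the standard TPS linear system:

```python
import numpy as np

def tps_kernel(r2):
    # TPS radial basis U(r) = r^2 log(r^2), with U(0) defined as 0.
    out = np.zeros_like(r2)
    mask = r2 > 0
    out[mask] = r2[mask] * np.log(r2[mask])
    return out

def fit_tps(src, dst):
    """Fit a 2-D thin-plate spline mapping landmark points src -> dst."""
    n = src.shape[0]
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])  # affine part: [1, x, y]
    # Standard TPS system: [K P; P^T 0] [w; a] = [dst; 0]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    params = np.linalg.solve(A, b)
    return src, params

def apply_tps(model, pts):
    """Warp arbitrary points (e.g. a demonstrated trajectory) with the fitted spline."""
    src, params = model
    n = src.shape[0]
    w, a = params[:n], params[n:]
    d2 = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return tps_kernel(d2) @ w + P @ a

# Hypothetical data: four corresponding landmarks and a short demo trajectory.
demo_landmarks = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
new_landmarks = demo_landmarks + np.array([0.5, 0.2])  # new scene is shifted
model = fit_tps(demo_landmarks, new_landmarks)
demo_traj = np.array([[0.2, 0.2], [0.5, 0.5], [0.8, 0.8]])
new_traj = apply_tps(model, demo_traj)  # trajectory adapted to the new scene
```

In practice the landmarks would come from the scene-correspondence step (e.g. matched points on the deformable object in RGB/depth images), the map would be fitted in 3-D with regularization, and the warped trajectory would still need to be checked against the robot's kinematic limits before execution.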
We will demonstrate the effectiveness of our method by evaluating it on several challenging tasks, including knot tying and placement, box assembly, coil winding, and filling a bag with cloths. These evaluations will be carried out both in simulation and on real robots. We expect the outcome of the proposed project to significantly advance the ability of robots to learn skills from demonstrations, especially skills involving the manipulation of deformable objects such as rope, fabric, bags, and tissue. In the long term, this project targets the automation of selected surgical tasks, which could improve health care, and of manufacturing tasks that involve deformable objects. 


Project number: 9042315
Grant type: GRF
Effective start/end date: 31/12/15 – 2/01/19