Automated 3-D micrograsping tasks performed by vision-based control

Lidai Wang, Lu Ren, James K. Mills, William L. Cleghorn

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

45 Citations (Scopus)

Abstract

We present a fully automated micrograsping methodology that uses a micro-robot and a microgripper to automatically grasp a micropart in three-dimensional (3-D) space. To accurately grasp a micropart in 3-D space, we propose a three-stage micrograsping strategy: (i) coarse alignment of a micropart with a microgripper in the image plane of a video camera system; (ii) alignment of the micropart with the microgripper in the direction normal to the image plane; and (iii) fine alignment of the micropart with the microgripper in the image plane, until the micropart is completely grasped. Two different vision-based feedback controllers are employed to perform the coarse and fine alignment in the image plane. The vision-based feedback controller used for the fine alignment employs position feedback signals obtained from two special patterns, which enables submicron alignment accuracy. Fully automated micrograsping experiments are conducted on a microassembly robot. The experimental results show that the average alignment accuracy achieved during automated grasping is approximately ±0.07 μm; the time to complete an automated micrograsping task is as short as 7.9 seconds; and the success rate is as high as 94%. © 2010 IEEE.
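The abstract describes the fine-alignment stage only at a high level: a vision-based feedback controller drives the micropart toward the microgripper using position signals extracted from two tracked patterns in the image plane. The Python sketch below illustrates one way such an image-plane proportional servo loop could be structured; it is not the authors' implementation, and the pixel calibration, gain, noise level, and simulated stage model are illustrative assumptions.

```python
# Minimal sketch of an image-plane proportional visual-servo loop for the
# fine-alignment stage. All numeric values below are illustrative assumptions.
import numpy as np

PIXEL_SIZE_UM = 0.5      # assumed camera calibration: micrometres per pixel
GAIN = 0.6               # proportional gain on the image-plane error
TOLERANCE_UM = 0.07      # stop once the measured error is within +/-0.07 um


def measure_offset_px(stage_pos_um, target_um, noise_um=0.02):
    """Simulated vision measurement: pixel offset between the two tracked
    patterns (micropart vs. microgripper), with small measurement noise."""
    error_um = target_um - stage_pos_um + np.random.normal(0.0, noise_um, 2)
    return error_um / PIXEL_SIZE_UM


def fine_align(stage_pos_um, target_um, max_iters=100):
    """Step the micro-robot stage until the image-plane error between the
    two patterns falls below the tolerance; returns (position, converged)."""
    for _ in range(max_iters):
        offset_px = measure_offset_px(stage_pos_um, target_um)
        error_um = offset_px * PIXEL_SIZE_UM
        if np.linalg.norm(error_um, ord=np.inf) < TOLERANCE_UM:
            return stage_pos_um, True
        stage_pos_um = stage_pos_um + GAIN * error_um  # proportional step
    return stage_pos_um, False


if __name__ == "__main__":
    final_pos, ok = fine_align(np.array([10.0, -5.0]), np.array([12.3, -4.1]))
    print("converged:", ok, "final position (um):", final_pos)
```

In a real system the simulated measurement would be replaced by pattern tracking on live camera frames, and the stage update would command the micro-robot's positioning axes; the proportional structure of the loop is the point of the sketch.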
Original language: English
Article number: 5411942
Pages (from-to): 417-426
Journal: IEEE Transactions on Automation Science and Engineering
Volume: 7
Issue number: 3
DOIs
Publication status: Published - Jul 2010
Externally published: Yes

Research Keywords

  • microassembly
  • Microelectromechanical systems
  • micrograsping
  • vision-based control
