| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 4944311 | Information Sciences | 2017 | 24 Pages | |
Abstract
Visual and tactile sensing are complementary modalities in the task of robotic grasping. In this paper, a grasp detection deep network is first proposed to detect the grasp rectangle from the visual image; then a new metric based on tactile sensing is designed to assess the stability of the grasp. Following this scheme, a THU grasp dataset, which includes the visual information, the corresponding tactile data, and the grasp configurations, is collected to train the proposed deep network. Experimental results demonstrate that the proposed grasp detection deep network outperforms other mainstream approaches on a public grasp dataset. Furthermore, the grasp success rate can be improved significantly in real-world scenarios. The trained model has also been successfully deployed on a new robotic platform to perform robotic grasping in a cluttered scenario.
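The abstract describes detecting a grasp rectangle from a visual image. As a rough illustration only, the sketch below shows a tiny CNN regressing the common 5-parameter grasp rectangle (x, y, theta, w, h) from an RGB image; the architecture, layer sizes, and names are assumptions for illustration and are not the authors' network or the tactile stability metric.

```python
# Minimal sketch, assuming the standard 5-parameter grasp rectangle
# representation (center x, center y, rotation theta, width w, height h).
# This is NOT the paper's architecture; all layer choices are illustrative.
import torch
import torch.nn as nn


class GraspRectangleNet(nn.Module):
    """Toy CNN that maps an RGB image to a single grasp rectangle."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> 128-d feature
        )
        # Regress the 5 grasp-rectangle parameters (x, y, theta, w, h).
        self.head = nn.Linear(128, 5)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        x = self.features(image).flatten(1)
        return self.head(x)


if __name__ == "__main__":
    net = GraspRectangleNet()
    dummy = torch.randn(1, 3, 224, 224)       # one RGB image
    print(net(dummy).shape)                   # torch.Size([1, 5])
```

In practice, such a regressor would be trained on image/rectangle pairs (e.g., from a grasp dataset) with a regression loss, and the predicted rectangle then converted to a gripper pose for execution.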
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Di Guo, Fuchun Sun, Bin Fang, Chao Yang, Ning Xi
