Article ID: 5469646
Journal: Procedia CIRP
Published Year: 2017
Pages: 5
File Type: PDF
Abstract
Vitreoretinal surgery tasks are difficult even for expert surgeons. Therefore, an eye-surgery robot has been developed to assist surgeons in performing such difficult tasks accurately and safely. In this paper, autonomous positioning of a micropipette mounted on an eye-surgery robot is proposed; specifically, the shadow of the micropipette is used for positioning in the depth direction. First, several microscope images of the micropipette and its shadow are obtained, and the images are manually segmented into three regions, namely, the micropipette, its shadow, and the eye ground. Next, each pixel of the segmented regions is labeled, and the labeled images are used as ground-truth data. Subsequently, a Gaussian Mixture Model (GMM) is trained by the eye-surgery robot system on the set of microscope images and their corresponding ground-truth data, using HSV color information as feature values. The trained GMM is then used to estimate the regions of the micropipette and its shadow in a real-time microscope image, as well as their tip positions, which are utilized for autonomous robotic position control. After planar positioning is performed using visual servoing, the micropipette is moved toward the eye ground until the distance between the tip of the micropipette and the tip of its shadow is equal to or less than a predefined threshold. Thus, the robot can accurately approach the eye ground and safely stop before contact. An autonomous positioning task is performed ten times in a simulated eye-surgery setup, and the robot stops at an average height of 1.37 mm above a predefined target when the threshold is 1.4 mm. Further improvement of the estimation accuracy in the image processing would enhance the positioning accuracy and safety.
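The abstract outlines a pipeline of GMM-based pixel classification on HSV features followed by a shadow-distance stop condition. The following is a minimal sketch of such a pipeline, not the authors' implementation: the library choices (OpenCV, scikit-learn, NumPy), the per-region GMM classification scheme, the lowest-pixel tip heuristic, and the pixel-to-millimeter scaling are all assumptions for illustration.

```python
# Hedged sketch of GMM-based segmentation and the shadow-distance stop
# condition described in the abstract. Not the authors' code; the tip
# heuristic and calibration are assumed for illustration.
import numpy as np
import cv2
from sklearn.mixture import GaussianMixture

REGIONS = ("micropipette", "shadow", "eye_ground")  # label indices 0, 1, 2

def fit_region_gmms(images_bgr, label_maps, n_components=3):
    """Fit one GMM per region on HSV pixel values from the ground-truth data."""
    gmms = {}
    for idx, name in enumerate(REGIONS):
        samples = []
        for img, labels in zip(images_bgr, label_maps):
            hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).reshape(-1, 3)
            samples.append(hsv[labels.reshape(-1) == idx])
        gmm = GaussianMixture(n_components=n_components, covariance_type="full")
        gmm.fit(np.vstack(samples).astype(np.float64))
        gmms[name] = gmm
    return gmms

def segment(image_bgr, gmms):
    """Label each pixel with the region whose GMM gives the highest log-likelihood."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(np.float64)
    scores = np.stack([gmms[name].score_samples(hsv) for name in REGIONS], axis=1)
    return scores.argmax(axis=1).reshape(image_bgr.shape[:2])

def tip_distance_mm(label_map, mm_per_pixel):
    """Estimate the pipette and shadow tips (here: the lowest labeled pixel,
    a simplifying assumption) and return their distance in millimeters."""
    tips = []
    for idx in (0, 1):  # micropipette, shadow
        ys, xs = np.nonzero(label_map == idx)
        tips.append(np.array([xs[ys.argmax()], ys.max()], dtype=float))
    return float(np.linalg.norm(tips[0] - tips[1])) * mm_per_pixel

# Stop condition after planar visual servoing: descend until the
# pipette-to-shadow tip distance reaches the threshold (1.4 mm in the paper).
# if tip_distance_mm(segment(frame, gmms), mm_per_pixel) <= 1.4:
#     stop_descent()  # hypothetical robot command
```

As the pipette approaches the eye ground, its tip and the tip of its shadow converge in the image, so a small measured distance serves as a proxy for small remaining depth; the threshold trades off approach accuracy against the safety margin before contact.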
Related Topics
Physical Sciences and Engineering; Engineering; Industrial and Manufacturing Engineering