Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
11021164 | Neurocomputing | 2018 | 31 Pages | |
Abstract
Chaetoceros is a dominant genus of marine planktonic diatoms with worldwide distribution. Because extracting setae from Chaetoceros images is difficult, automatic segmentation of Chaetoceros remains a challenging task. In this paper, we address this task by treating the whole segmentation process as unsupervised pixel-wise classification without human participation. First, we automatically produce positive (object) and negative (background) samples for subsequent training by combining the advantages of two image processing algorithms: the Grayscale Surface Direction Angle Model (GSDAM) for extracting setae information and Canny for detecting cell edges in low-contrast, noisy microscopic images. Second, we perform pixel-wise training, using the produced samples to train a Deep Convolutional Neural Network (DCNN). Finally, the trained DCNN labels the remaining pixels as object or background to obtain the final segmentation. We compare our method with eight mainstream segmentation approaches: Otsu's thresholding, Canny, Watershed, Mean Shift, gPb-owt-ucm, Normalized Cut, the Efficient Graph-based method, and GSDAM. To evaluate segmentation results objectively, we apply six well-known evaluation indexes. Experimental results on a new Chaetoceros image dataset with human-labelled ground truth show that our method outperforms the eight mainstream segmentation methods in both quantitative and qualitative evaluation.
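As a rough illustration of the pipeline described in the abstract, the sketch below uses OpenCV's Canny detector to stand in for the edge-based sample generation and a tiny patch-wise CNN as the pixel classifier. GSDAM itself is not reproduced here, and the function names, patch size, and thresholds are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: Canny approximates the edge-based sample generation;
# GSDAM (the authors' setae extractor) is not reproduced. All names, thresholds,
# and the patch size are assumptions for illustration.
import cv2
import numpy as np
import torch
import torch.nn as nn

PATCH = 17  # hypothetical patch size around each pixel


def make_samples(gray):
    """Produce rough positive (edge) and negative (background) pixel samples
    from an 8-bit grayscale image."""
    edges = cv2.Canny(gray, 50, 150)  # object-ish pixels near cell edges
    pos = np.argwhere(edges > 0)
    neg = np.argwhere(edges == 0)
    rng = np.random.default_rng(0)
    # Balance the two classes by subsampling the background pixels.
    neg = neg[rng.choice(len(neg), size=min(len(pos), len(neg)), replace=False)]
    return pos, neg


class PixelCNN(nn.Module):
    """Tiny patch-wise classifier: object vs. background for the center pixel."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),
        )

    def forward(self, x):
        return self.net(x)


def extract_patches(gray, coords):
    """Cut PATCH x PATCH windows centered on the given (row, col) coordinates."""
    pad = PATCH // 2
    padded = np.pad(gray, pad, mode="reflect").astype(np.float32) / 255.0
    return np.stack([padded[r:r + PATCH, c:c + PATCH] for r, c in coords])


# Usage sketch: feed the patches to the classifier with a channel dimension, e.g.
#   pos, neg = make_samples(gray)
#   x = torch.from_numpy(extract_patches(gray, pos)).unsqueeze(1)
#   logits = PixelCNN()(x)
```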
Keywords
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Ning Tang, Fei Zhou, Zhaorui Gu, Haiyong Zheng, Zhibin Yu, Bing Zheng