Article code | Journal code | Publication year | English article | Full text |
---|---|---|---|---|
405533 | 677666 | 2012 | 13-page PDF | Free download |

Based on the reduced support vector machine (RSVM), we propose a multi-view algorithm, two-teachers–one-student (2T1S), for semi-supervised learning (SSL). Unlike typical multi-view methods, the reduced sets in RSVM provide different views in the kernel-induced feature space rather than in the input space. No label information is needed to select the reduced sets, which makes RSVM applicable to SSL. Our algorithm blends the concepts of co-training and consensus training. Through co-training, the classifiers trained on two views “teach” a third classifier on the remaining view, and this process is repeated for every teachers–student combination. Through consensus training, agreement between the predictions from more than one view gives higher confidence when labeling unlabeled data. The results show that the proposed 2T1S achieves high cross-validation accuracy, even compared to training with all label information available.
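The teachers–student rotation described above can be sketched as follows. This is a hedged illustration, not the authors' implementation: the paper derives views from RSVM reduced sets in the kernel feature space, whereas here, as a stand-in, each view is simply a subset of input columns, and the data, view splits, and classifier settings are all illustrative.

```python
# Minimal sketch of one two-teachers-one-student (2T1S) pass.
# Assumption: views are input-column subsets (the paper instead uses
# RSVM reduced sets as views in the kernel feature space).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic binary problem: the label is the sign of a linear score.
X = rng.normal(size=(300, 6))
y = (X @ np.array([1.0, -1.0, 0.5, 0.5, -0.5, 1.0]) > 0).astype(int)

n_lab = 30                       # only a few labeled points (SSL setting)
X_unl = X[n_lab:]                # pool of unlabeled points
VIEWS = [[0, 1], [2, 3], [4, 5]]

# Per-view labeled sets, each starting from the same labeled seed.
data = [(X[:n_lab, v], y[:n_lab].copy()) for v in VIEWS]
clfs = [SVC(kernel="rbf", gamma="scale").fit(Xv, yv) for Xv, yv in data]

# One teachers-student pass for each choice of student view.
for s, sv in enumerate(VIEWS):
    t1, t2 = [i for i in range(3) if i != s]
    p1 = clfs[t1].predict(X_unl[:, VIEWS[t1]])
    p2 = clfs[t2].predict(X_unl[:, VIEWS[t2]])
    agree = p1 == p2             # consensus of the two teachers
    # The student learns from unlabeled points the teachers agree on.
    Xs, ys = data[s]
    Xs = np.vstack([Xs, X_unl[agree][:, sv]])
    ys = np.concatenate([ys, p1[agree]])
    clfs[s] = SVC(kernel="rbf", gamma="scale").fit(Xs, ys)

acc = float(np.mean([clfs[i].score(X[:, v], y) for i, v in enumerate(VIEWS)]))
print(f"mean per-view accuracy: {acc:.2f}")
```

In practice this pass would be iterated, alternating consensus labeling and co-training until the unlabeled pool is exhausted or predictions stabilize; confidence thresholds on the teachers' agreement would replace the plain equality check used here.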
► A new multi-view method blends the ideas of co-training and consensus training.
► Our method alternately performs consensus training and co-training for SSL.
► Reduced sets act as different views in the feature space rather than in the input space.
► We select the representative reduced sets (views) with the IRSVM algorithm.
► Our SSL scheme attains test accuracy comparable to supervised learning with all label information.
Journal: Neural Networks - Volume 25, January 2012, Pages 57–69