Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
407326 | Neurocomputing | 2012 | 13 | |
Multi-view learning is designed to process data drawn from multiple information sources. Our previous work extended multi-view learning and proposed an effective learning machine named MultiV-MHKS. MultiV-MHKS first decomposes a base classifier into $M$ different sub-classifiers and then designs a joint learning process for the generated $M$ sub-classifiers, each of which is taken as one view of MultiV-MHKS. However, MultiV-MHKS assumed that each sub-classifier should play an equal role in the ensemble, so the weight values $r_q$, $q = 1, \dots, M$, of the sub-classifiers were all set to the same value. In practice, this assumption is neither flexible nor appropriate, since the $r_q$ should reflect the different contributions of their corresponding views. In order to make the $r_q$ flexible and appropriate, in this paper we propose a regularized multi-view learning machine named RMultiV-MHKS with optimized $r_q$. We optimize the $r_q$ using the Response Surface Technique (RST) on cross-validation data and thus obtain a regularized multi-view learning machine. Doing so can assign a certain view zero weight in the combination, which means that this view carries no discriminative information for the problem at hand and can therefore be pruned. The experimental results validate the effectiveness of the proposed RMultiV-MHKS and explore the effect of some important parameters. The characteristics of RMultiV-MHKS are: (1) it distributes more weight to the favorable views, which reflects the structure of the problem; (2) it has a tighter generalization risk bound than its corresponding single-view learning machine in terms of the Rademacher complexity; and (3) it achieves statistically superior classification performance to the original MultiV-MHKS.
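To make the weighting scheme concrete, the following is a minimal sketch of validation-driven view weighting, not the paper's implementation. Logistic regression trained on disjoint feature subsets stands in for the MHKS sub-classifiers, and a coarse grid search over the weight simplex stands in for RST; the dataset, split, and grid step are all illustrative assumptions.

```python
import numpy as np
from itertools import product
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative data and train/validation split (not from the paper).
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

M = 3  # number of views / sub-classifiers
# Each "view" is a disjoint slice of the feature space.
views = np.array_split(np.arange(X.shape[1]), M)

# One sub-classifier per view; logistic regression is a stand-in for MHKS.
subs = [make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        .fit(X_tr[:, v], y_tr) for v in views]

# Validation-set positive-class probabilities of each sub-classifier.
probs = np.stack([clf.predict_proba(X_va[:, v])[:, 1]
                  for clf, v in zip(subs, views)])   # shape (M, n_va)

def weighted_accuracy(r):
    """Validation accuracy of the r-weighted combination of views."""
    score = r @ probs            # convex combination of view outputs
    return np.mean((score > 0.5) == y_va)

# Coarse grid over the simplex {r : r_q >= 0, sum_q r_q = 1}.
# A zero entry in r means the corresponding view is pruned.
step = 0.1
grid = [np.array(r)
        for r in product(np.arange(0.0, 1.0 + step, step), repeat=M)
        if abs(sum(r) - 1.0) < 1e-9]

best_r = max(grid, key=weighted_accuracy)
print("optimized weights r_q:", np.round(best_r, 2))
print("validation accuracy :", weighted_accuracy(best_r))
```

A zero entry in `best_r` corresponds to the pruning behavior described above: that view's output simply drops out of the combination.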