|Article Code||Journal Code||Publication Year||English Article||Persian Translation||Full-Text Version|
|383537||660826||2016||16-page PDF||Order||Free download|
• A novel multi-objective differential evolution algorithm-based classifier ensemble for text sentiment classification.
• An empirical comparison of weighted and unweighted voting schemes.
• Extensive empirical analysis of metaheuristic-based voting schemes for sentiment analysis.
• High classification accuracies for text sentiment classification (98.86% for Laptop dataset).
Typically performed by supervised machine learning algorithms, sentiment analysis is highly useful for extracting subjective information from online text documents. Most approaches that apply ensemble learning paradigms to sentiment analysis rely on feature engineering to enhance predictive performance. In response, we developed a multi-objective optimization-based weighted voting scheme that assigns appropriate weight values to classifiers and to each output class based on the predictive performance of the classification algorithms, thereby enhancing the predictive performance of sentiment classification. The proposed ensemble method is based on static classifier selection involving majority voting error and forward search, as well as a multi-objective differential evolution algorithm. Building on the static classifier selection scheme, the proposed ensemble method incorporates Bayesian logistic regression, naïve Bayes, linear discriminant analysis, logistic regression, and support vector machines as base learners, whose precision and recall values determine the weight adjustment. Our experimental analysis of classification tasks, including sentiment analysis, software defect prediction, credit risk modeling, spam filtering, and semantic mapping, suggests that the proposed classification scheme predicts better than conventional ensemble learning methods such as AdaBoost, bagging, random subspace, and majority voting. Of all the datasets examined, the laptop dataset yielded the best classification accuracy (98.86%).
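The core combination rule the abstract describes, per-classifier, per-class weighted voting, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `weighted_vote`, the toy probabilities, and the fixed weight matrix are all assumptions; in the paper the weights are tuned by a multi-objective differential evolution algorithm from validation precision/recall, which is omitted here.

```python
import numpy as np

def weighted_vote(class_probs, weights):
    """Combine base-learner outputs by class-specific weighted voting.

    class_probs: (n_classifiers, n_samples, n_classes) predicted scores
    weights:     (n_classifiers, n_classes) weight per classifier per class;
                 in the paper these would come from a metaheuristic search,
                 here they are simply given.
    """
    # Scale each classifier's per-class scores by its per-class weight,
    # sum the weighted scores over classifiers, and pick the top class.
    weighted = class_probs * weights[:, None, :]
    return weighted.sum(axis=0).argmax(axis=1)

# Toy example: 2 base learners, 3 documents, 2 sentiment classes
probs = np.array([
    [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]],   # hypothetical classifier A
    [[0.6, 0.4], [0.7, 0.3], [0.1, 0.9]],   # hypothetical classifier B
])
w = np.array([[1.0, 0.5],    # A weighted higher on class 0
              [0.5, 1.0]])   # B weighted higher on class 1
print(weighted_vote(probs, w))  # -> [0 0 1]
```

Note how the class-specific weights flip the decision on the second document: unweighted averaging would be a near tie, but classifier A's higher weight on class 0 settles it.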
Journal: Expert Systems with Applications - Volume 62, 15 November 2016, Pages 1–16