Article ID: 4628704
Journal: Applied Mathematics and Computation
Published Year: 2013
Pages: 13 Pages
File Type: PDF
Abstract

Probabilistic reasoning and learning with permutation data have gained interest in recent years because of their use in different ranking-based real-world applications. Therefore, constructing a model from a given set of permutations or rankings has become a target problem in the machine learning community. In this paper we focus on probabilistic modelling, and concretely on the use of a well-known permutation-based distribution, the Mallows model.

Learning a Mallows model from data requires the estimation of two parameters: a consensus permutation π0 and a dispersion parameter θ. Since the exact computation of these parameters is an NP-hard problem, it is natural to consider heuristics to tackle it. An interesting approach consists in the use of a two-step procedure: first estimating π0, and then computing θ for the given π0. This is possible because the optimal π0 does not depend on θ. When following this approach, the computation of π0 reduces to the rank aggregation problem, which consists in finding the ranking that best represents the dataset.

In this paper we propose the use of genetic algorithms to tackle this problem, studying their performance with respect to state-of-the-art algorithms, especially in complex cases, that is, when the number of items to rank is large and there is little consensus between the available rankings (which translates into a low value of θ). After a series of experiments involving data of different types, we conclude that our evolutionary approach clearly outperforms the other tested algorithms.
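A minimal sketch of the two-step idea described in the abstract, assuming a Mallows model under the Kendall tau distance: a basic genetic algorithm (order crossover plus swap mutation) approximates the consensus π0 by minimizing the total distance to the sample, and θ is then estimated by a crude grid search over the likelihood for that fixed π0. This is only an illustration, not the authors' implementation; the function names and GA settings (pop_size, n_gens, mut_rate) are assumptions.

    import math
    import random
    from itertools import combinations

    def kendall(p, q):
        """Kendall tau distance: number of pairwise disagreements between p and q."""
        pos = {item: i for i, item in enumerate(q)}
        return sum(1 for a, b in combinations(p, 2) if pos[a] > pos[b])

    def total_distance(p, sample):
        return sum(kendall(p, r) for r in sample)

    def order_crossover(p1, p2):
        """Keep a random slice of p1 and fill the remaining items in the order of p2."""
        i, j = sorted(random.sample(range(len(p1)), 2))
        child = p1[i:j]
        return child + [x for x in p2 if x not in child]

    def ga_consensus(sample, pop_size=50, n_gens=200, mut_rate=0.2):
        """Step 1: approximate the consensus pi0 (rank aggregation) with a basic GA."""
        n = len(sample[0])
        pop = [random.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(n_gens):
            pop.sort(key=lambda p: total_distance(p, sample))
            parents = pop[: pop_size // 2]          # elitist selection
            pop = parents[:]
            while len(pop) < pop_size:
                p1, p2 = random.sample(parents, 2)
                child = order_crossover(p1, p2)
                if random.random() < mut_rate:      # swap mutation
                    a, b = random.sample(range(n), 2)
                    child[a], child[b] = child[b], child[a]
                pop.append(child)
        return min(pop, key=lambda p: total_distance(p, sample))

    def estimate_theta(pi0, sample, grid=None):
        """Step 2: maximum-likelihood theta for a fixed pi0 via a crude grid search.

        Under the Mallows model P(r) proportional to exp(-theta * d(r, pi0)),
        the normalizing constant has the closed form
        psi(theta) = prod_{j=1}^{n-1} (1 - exp(-(n-j+1)*theta)) / (1 - exp(-theta)).
        """
        n = len(pi0)
        d_bar = total_distance(pi0, sample) / len(sample)   # mean distance to pi0
        grid = grid or [k / 100 for k in range(1, 501)]

        def log_lik(theta):
            log_psi = sum(
                math.log((1 - math.exp(-(n - j + 1) * theta)) / (1 - math.exp(-theta)))
                for j in range(1, n))
            return -theta * d_bar - log_psi

        return max(grid, key=log_lik)

    # Usage: rankings over 6 items with a clear consensus plus some noise.
    sample = [[0, 1, 2, 3, 4, 5], [0, 2, 1, 3, 4, 5],
              [1, 0, 2, 3, 5, 4], [0, 1, 3, 2, 4, 5]]
    pi0 = ga_consensus(sample)
    print("pi0 =", pi0, " theta =", estimate_theta(pi0, sample))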

Related Topics
Physical Sciences and Engineering Mathematics Applied Mathematics
Authors
, , ,