Article ID: 4946306
Journal: Knowledge-Based Systems
Published Year: 2017
Pages: 17
File Type: PDF
Abstract
High-dimensional and sparse (HiDS) matrices are frequently encountered in industrial applications, owing to the exploding number of involved entities and the need to describe the relationships among them. Latent factor (LF) models are highly effective and efficient at extracting useful knowledge from such HiDS matrices: they represent the known data of a HiDS matrix well, with high computational and storage efficiency. When building an LF model, incorporating linear biases has proven effective in further improving its performance on HiDS matrices in many applications. However, prior works assign a single bias to each entity, i.e., one bias per user and one per movie in a user-movie HiDS matrix. In this work we argue that extending the linear biases, i.e., assigning multiple biases to each involved entity, can further improve an LF model's performance in some applications. To verify this hypothesis, we first extended the linear biases of an LF model and then deduced the corresponding training rule for the involved LFs. Subsequently, we conducted experiments on ten HiDS matrices generated by different industrial applications, evaluating the resulting LF models' prediction accuracy for the missing data of the involved HiDS matrices. The experimental results indicate that in most testing cases an LF model needs extended linear biases to achieve the highest prediction accuracy. Hence, the number of linear biases should be chosen with care for an LF model to achieve the best performance in practice.
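To make the idea of extended linear biases concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a standard biased matrix factorization trained by stochastic gradient descent, where each user and each item carries K linear biases instead of one. All function names, hyperparameters, and the exact update rule are assumptions introduced here for illustration only.

```python
import numpy as np

def train_lf_extended_biases(ratings, n_users, n_items, f=20, K=2,
                             lr=0.01, reg=0.05, epochs=50, seed=0):
    """Sketch of an LF model with K linear biases per entity.

    ratings: list of (user, item, value) triples, i.e. the known entries
    of the HiDS matrix. Hyperparameters are illustrative, not tuned.
    """
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, f))   # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, f))   # item latent factors
    BU = rng.normal(scale=0.1, size=(n_users, K))  # K biases per user
    BI = rng.normal(scale=0.1, size=(n_items, K))  # K biases per item
    mu = np.mean([r for _, _, r in ratings])       # global average

    for _ in range(epochs):
        for u, i, r in ratings:
            # Prediction: global mean + summed extended biases + inner product.
            pred = mu + BU[u].sum() + BI[i].sum() + P[u] @ Q[i]
            e = r - pred
            # SGD updates with L2 regularization on biases and factors.
            BU[u] += lr * (e - reg * BU[u])
            BI[i] += lr * (e - reg * BI[i])
            P[u], Q[i] = (P[u] + lr * (e * Q[i] - reg * P[u]),
                          Q[i] + lr * (e * P[u] - reg * Q[i]))
    return mu, BU, BI, P, Q

def predict(model, u, i):
    mu, BU, BI, P, Q = model
    return mu + BU[u].sum() + BI[i].sum() + P[u] @ Q[i]
```

Setting K=1 recovers the conventional single-bias LF model, so K is the extra knob whose value, per the abstract's conclusion, should be chosen with care for each application.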
Related Topics
Physical Sciences and Engineering, Computer Science, Artificial Intelligence
Authors